Track MCP

The world's largest repository of Model Context Protocol servers. Discover, explore, and submit MCP tools.


    LocalAI

    🤖 The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware.

    36,790 stars
    Go
    Updated Nov 4, 2025
    ai
    api
    audio-generation
    decentralized
    distributed
    gemma
    image-generation
    libp2p
    llama
    llm
    mamba
    mcp
    mistral
    musicgen
    object-detection
    rerank
    rwkv
    stable-diffusion
    text-generation
    tts

    Documentation

    💡 Get help: ❓ FAQ · 💭 Discussions · 💬 Discord · 📖 Documentation website

    💻 Quickstart · 🖼️ Models · 🚀 Roadmap · 🛫 Examples · Try on Telegram


    LocalAI is the free, Open Source OpenAI alternative. LocalAI acts as a drop-in replacement REST API that's compatible with OpenAI (Elevenlabs, Anthropic... ) API specifications for local AI inferencing. It allows you to run LLMs and generate images, audio (and more) locally or on-prem with consumer-grade hardware, supporting multiple model families. It does not require a GPU. It is created and maintained by Ettore Di Giacinto.
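    Because the API is OpenAI-compatible, any OpenAI client or plain curl can talk to it. A minimal sketch (assuming a local instance on the default port 8080, with the llama-3.2-1b-instruct:q4_k_m model from the Quickstart below already installed):

    bash
    # Query the OpenAI-compatible chat completions endpoint.
    # Assumes LocalAI on localhost:8080 and llama-3.2-1b-instruct:q4_k_m installed.
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{
        "model": "llama-3.2-1b-instruct:q4_k_m",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'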

    📚🆕 Local Stack Family

    🆕 LocalAI is now part of a comprehensive suite of AI tools designed to work together:

    LocalAGI: A powerful Local AI agent management platform that serves as a drop-in replacement for OpenAI's Responses API, enhanced with advanced agentic capabilities.

    LocalRecall: A REST-ful API and knowledge base management system that provides persistent memory and storage capabilities for AI agents.

    Screenshots

    WebUI screenshots: Talk interface · Audio generation · Models overview · Image generation (Flux.1-dev) · Chat interface · Home · Login · P2P/Swarm dashboard

    💻 Quickstart

    Run the installer script:

    bash
    # Basic installation
    curl https://localai.io/install.sh | sh

    For more installation options, see Installer Options.

    macOS Download:

    Note: the DMGs are not signed by Apple, so macOS quarantines them on first launch. See https://github.com/mudler/LocalAI/issues/6268 for a workaround; the fix is tracked here: https://github.com/mudler/LocalAI/issues/6244
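    Until then, a common workaround for unsigned apps is to clear the quarantine attribute after copying the app to /Applications (a hedged sketch; the app bundle path is an assumption, see the linked issue for the recommended steps):

    bash
    # Remove macOS's quarantine flag from the app bundle (path is an assumption).
    xattr -dr com.apple.quarantine /Applications/LocalAI.app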

    Or run with docker:

    💡 Docker Run vs Docker Start

    - docker run creates and starts a new container. If a container with the same name already exists, this command will fail.

    - docker start starts an existing container that was previously created with docker run.

    If you've already run LocalAI before and want to start it again, use: docker start -i local-ai
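    For example, to stop a running instance and bring the same container back later:

    bash
    # Stop the running container, then restart it attached to the terminal.
    docker stop local-ai
    docker start -i local-ai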

    CPU only image:

    bash
    docker run -ti --name local-ai -p 8080:8080 localai/localai:latest

    NVIDIA GPU Images:

    bash
    # CUDA 12.0
    docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-12
    
    # CUDA 11.7
    docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-gpu-nvidia-cuda-11
    
    # NVIDIA Jetson (L4T) ARM64
    docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-nvidia-l4t-arm64

    AMD GPU Images (ROCm):

    bash
    docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-gpu-hipblas

    Intel GPU Images (oneAPI):

    bash
    docker run -ti --name local-ai -p 8080:8080 --device=/dev/dri/card1 --device=/dev/dri/renderD128 localai/localai:latest-gpu-intel

    Vulkan GPU Images:

    bash
    docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-gpu-vulkan

    AIO Images (pre-downloaded models):

    bash
    # CPU version
    docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
    
    # NVIDIA CUDA 12 version
    docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-12
    
    # NVIDIA CUDA 11 version
    docker run -ti --name local-ai -p 8080:8080 --gpus all localai/localai:latest-aio-gpu-nvidia-cuda-11
    
    # Intel GPU version
    docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-gpu-intel
    
    # AMD GPU version
    docker run -ti --name local-ai -p 8080:8080 --device=/dev/kfd --device=/dev/dri --group-add=video localai/localai:latest-aio-gpu-hipblas

    For more information about the AIO images and pre-downloaded models, see Container Documentation.
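    The AIO images download their models on first boot, so startup can take a while. One way to wait until the API is up (a sketch assuming the readiness endpoint on the default port; see the container documentation):

    bash
    # Poll the readiness endpoint until LocalAI reports ready.
    until curl -sf http://localhost:8080/readyz > /dev/null; do sleep 2; done
    echo "LocalAI is ready"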

    To load models:

    bash
    # From the model gallery (see available models with `local-ai models list`, in the WebUI from the model tab, or visiting https://models.localai.io)
    local-ai run llama-3.2-1b-instruct:q4_k_m
    # Start LocalAI with the phi-2 model directly from huggingface
    local-ai run huggingface://TheBloke/phi-2-GGUF/phi-2.Q8_0.gguf
    # Install and run a model from the Ollama OCI registry
    local-ai run ollama://gemma:2b
    # Run a model from a configuration file
    local-ai run https://gist.githubusercontent.com/.../phi-2.yaml
    # Install and run a model from a standard OCI registry (e.g., Docker Hub)
    local-ai run oci://localai/phi-2:latest

    ⚡ Automatic Backend Detection: When you install models from the gallery or YAML files, LocalAI automatically detects your system's GPU capabilities (NVIDIA, AMD, Intel) and downloads the appropriate backend. For advanced configuration options, see GPU Acceleration.
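    Once a model is installed, you can verify it is registered via the OpenAI-compatible model listing endpoint (assuming the default port):

    bash
    # List installed models through the OpenAI-compatible API.
    curl http://localhost:8080/v1/models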

    For more information, see 💻 Getting started. If you are interested in our roadmap items and future enhancements, see the issues labeled as Roadmap here.

    📰 Latest project news

    • October 2025: 🔌 Model Context Protocol (MCP) support added for agentic capabilities with external tools
    • September 2025: New Launcher application for macOS and Linux; extended support to many backends for Mac and NVIDIA L4T devices. Models: added MLX-Audio, WAN 2.2. WebUI improvements; Python-based backends now ship portable Python environments.
    • August 2025: MLX, MLX-VLM, Diffusers and llama.cpp are now supported on Mac M1/M2/M3+ chips (with development suffix in the gallery): https://github.com/mudler/LocalAI/pull/6049 https://github.com/mudler/LocalAI/pull/6119 https://github.com/mudler/LocalAI/pull/6121 https://github.com/mudler/LocalAI/pull/6060
    • July/August 2025: 🔍 Object Detection added to the API featuring rf-detr
    • July 2025: All backends migrated outside of the main binary. LocalAI is now more lightweight, small, and automatically downloads the required backend to run the model. Read the release notes
    • June 2025: Backend management has been added. Attention: extras images are going to be deprecated from the next release! Read the backend management PR.
    • May 2025: Audio input and reranking in the llama.cpp backend, Realtime API, support for Gemma, SmolVLM, and more multimodal models (available in the gallery).
    • May 2025: Important: image name changes. See release
    • Apr 2025: Rebrand, WebUI enhancements
    • Apr 2025: LocalAGI and LocalRecall join the LocalAI family stack.
    • Apr 2025: WebUI overhaul, AIO images updates
    • Feb 2025: Backend cleanup, breaking changes, new backends (kokoro, OuteTTS, faster-whisper), NVIDIA L4T images
    • Jan 2025: LocalAI model release: https://huggingface.co/mudler/LocalAI-functioncall-phi-4-v0.3, SANA support in diffusers: https://github.com/mudler/LocalAI/pull/4603
    • Dec 2024: stablediffusion.cpp backend (ggml) added ( https://github.com/mudler/LocalAI/pull/4289 )
    • Nov 2024: Bark.cpp backend added ( https://github.com/mudler/LocalAI/pull/4287 )
    • Nov 2024: Voice activity detection models (VAD) added to the API: https://github.com/mudler/LocalAI/pull/4204
    • Oct 2024: examples moved to LocalAI-examples
    • Aug 2024: 🆕 FLUX-1, P2P Explorer
    • July 2024: 🔥🔥 🆕 P2P Dashboard, LocalAI Federated mode and AI Swarms: https://github.com/mudler/LocalAI/pull/2723. P2P Global community pools: https://github.com/mudler/LocalAI/issues/3113
    • May 2024: 🔥🔥 Decentralized P2P llama.cpp: https://github.com/mudler/LocalAI/pull/2343 (peer2peer llama.cpp!) 👉 Docs https://localai.io/features/distribute/
    • May 2024: 🔥🔥 Distributed inferencing: https://github.com/mudler/LocalAI/pull/2324
    • April 2024: Reranker API: https://github.com/mudler/LocalAI/pull/2121

    Roadmap items: List of issues

    🚀 Features

    • 🧩 Backend Gallery: Install/remove backends on the fly, powered by OCI images — fully customizable and API-driven.
    • 📖 Text generation with GPTs (llama.cpp, transformers, vllm, and more)
    • 🗣 Text to Audio
    • 🔈 Audio to Text (Audio transcription with whisper.cpp)
    • 🎨 Image generation
    • 🔥 OpenAI-like tools API
    • 🧠 Embeddings generation for vector databases (see the sketch after this list)
    • ✍️ Constrained grammars
    • 🖼️ Download Models directly from Huggingface
    • 🥽 Vision API
    • 🔍 Object Detection
    • 📈 Reranker API
    • 🆕🖧 P2P Inferencing
    • 🆕🔌 Model Context Protocol (MCP) - Agentic capabilities with external tools, building on LocalAGI's agentic capabilities
    • 🔊 Voice activity detection (Silero-VAD support)
    • 🌍 Integrated WebUI!
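    As one example of that API surface, embeddings for vector databases can be generated through the OpenAI-compatible endpoint. A minimal sketch (the model name is a placeholder; use whatever embedding-capable model you have installed):

    bash
    # Generate embeddings via the OpenAI-compatible endpoint.
    # "bert-embeddings" is a placeholder for your installed embedding model.
    curl http://localhost:8080/v1/embeddings \
      -H "Content-Type: application/json" \
      -d '{"model": "bert-embeddings", "input": "A sentence to embed"}'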

    🧩 Supported Backends & Acceleration

    LocalAI supports a comprehensive range of AI backends with multiple acceleration options:

    Text Generation & Language Models

    Backend | Description | Acceleration Support
    llama.cpp | LLM inference in C/C++ | CUDA 11/12, ROCm, Intel SYCL, Vulkan, Metal, CPU
    vLLM | Fast LLM inference with PagedAttention | CUDA 12, ROCm, Intel
    transformers | HuggingFace transformers framework | CUDA 11/12, ROCm, Intel, CPU
    exllama2 | GPTQ inference library | CUDA 12
    MLX | Apple Silicon LLM inference | Metal (M1/M2/M3+)
    MLX-VLM | Apple Silicon Vision-Language Models | Metal (M1/M2/M3+)

    Audio & Speech Processing

    Backend | Description | Acceleration Support
    whisper.cpp | OpenAI Whisper in C/C++ | CUDA 12, ROCm, Intel SYCL, Vulkan, CPU
    faster-whisper | Fast Whisper with CTranslate2 | CUDA 12, ROCm, Intel, CPU
    bark | Text-to-audio generation | CUDA 12, ROCm, Intel
    bark-cpp | C++ implementation of Bark | CUDA, Metal, CPU
    coqui | Advanced TTS with 1100+ languages | CUDA 12, ROCm, Intel, CPU
    kokoro | Lightweight TTS model | CUDA 12, ROCm, Intel, CPU
    chatterbox | Production-grade TTS | CUDA 11/12, CPU
    piper | Fast neural TTS system | CPU
    kitten-tts | Kitten TTS models | CPU
    silero-vad | Voice Activity Detection | CPU
    neutts | Text-to-speech with voice cloning | CUDA 12, ROCm, CPU

    Image & Video Generation

    Backend | Description | Acceleration Support
    stablediffusion.cpp | Stable Diffusion in C/C++ | CUDA 12, Intel SYCL, Vulkan, CPU
    diffusers | HuggingFace diffusion models | CUDA 11/12, ROCm, Intel, Metal, CPU

    Specialized AI Tasks

    Backend | Description | Acceleration Support
    rfdetr | Real-time object detection | CUDA 12, Intel, CPU
    rerankers | Document reranking API | CUDA 11/12, ROCm, Intel, CPU
    local-store | Vector database | CPU
    huggingface | HuggingFace API integration | API-based

    Hardware Acceleration Matrix

    Acceleration Type | Supported Backends | Hardware Support
    NVIDIA CUDA 11 | llama.cpp, whisper, stablediffusion, diffusers, rerankers, bark, chatterbox | NVIDIA hardware
    NVIDIA CUDA 12 | All CUDA-compatible backends | NVIDIA hardware
    AMD ROCm | llama.cpp, whisper, vllm, transformers, diffusers, rerankers, coqui, kokoro, bark, neutts | AMD graphics
    Intel oneAPI | llama.cpp, whisper, stablediffusion, vllm, transformers, diffusers, rfdetr, rerankers, exllama2, coqui, kokoro, bark | Intel Arc, Intel iGPUs
    Apple Metal | llama.cpp, whisper, diffusers, MLX, MLX-VLM, bark-cpp | Apple M1/M2/M3+
    Vulkan | llama.cpp, whisper, stablediffusion | Cross-platform GPUs
    NVIDIA Jetson | llama.cpp, whisper, stablediffusion, diffusers, rfdetr | ARM64 embedded AI
    CPU Optimized | All backends | AVX/AVX2/AVX512, quantization support
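    The CPU-optimized builds take advantage of AVX/AVX2/AVX512 where available. To check which of these instruction sets your processor exposes on Linux (a sketch; flag names as reported by the kernel):

    bash
    # List AVX-family CPU flags reported by the Linux kernel.
    grep -o 'avx[0-9a-z_]*' /proc/cpuinfo | sort -u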

    🔗 Community and integrations

    Build and deploy custom containers:

    • https://github.com/sozercan/aikit

    WebUIs:

    • https://github.com/Jirubizu/localai-admin
    • https://github.com/go-skynet/LocalAI-frontend
    • QA-Pilot (an interactive chat project that leverages LocalAI LLMs for rapid understanding and navigation of GitHub code repositories): https://github.com/reid41/QA-Pilot

    Agentic Libraries:

    • https://github.com/mudler/cogito

    MCPs:

    • https://github.com/mudler/MCPs

    Model galleries

    • https://github.com/go-skynet/model-gallery

    Voice:

    • https://github.com/richiejp/VoxInput

    Other:

    • Helm chart https://github.com/go-skynet/helm-charts
    • VSCode extension https://github.com/badgooooor/localai-vscode-plugin
    • Langchain: https://python.langchain.com/docs/integrations/providers/localai/
    • Terminal utility https://github.com/djcopley/ShellOracle
    • Local Smart assistant https://github.com/mudler/LocalAGI
    • Home Assistant https://github.com/sammcj/homeassistant-localai / https://github.com/drndos/hass-openai-custom-conversation / https://github.com/valentinfrlch/ha-gpt4vision
    • Discord bot https://github.com/mudler/LocalAGI/tree/main/examples/discord
    • Slack bot https://github.com/mudler/LocalAGI/tree/main/examples/slack
    • Shell-Pilot (interact with LLMs using LocalAI models via pure shell scripts on your Linux or macOS system): https://github.com/reid41/shell-pilot
    • Telegram bot https://github.com/mudler/LocalAI/tree/master/examples/telegram-bot
    • Another Telegram Bot https://github.com/JackBekket/Hellper
    • Auto-documentation https://github.com/JackBekket/Reflexia
    • GitHub bot which answers issues, with code and documentation as context: https://github.com/JackBekket/GitHelper
    • GitHub Actions: https://github.com/marketplace/actions/start-localai
    • Examples: https://github.com/mudler/LocalAI/tree/master/examples/

    🔗 Resources

    • LLM finetuning guide
    • How to build locally
    • How to install in Kubernetes
    • Projects integrating LocalAI
    • How tos section (curated by our community)

    📖 🎥 Media, Blogs, Social

    • Run Visual studio code with LocalAI (SUSE)
    • 🆕 Run LocalAI on Jetson Nano Devkit
    • Run LocalAI on AWS EKS with Pulumi
    • Run LocalAI on AWS
    • Create a Slack bot for teams and OSS projects that answers documentation questions
    • LocalAI meets k8sgpt
    • Question Answering on Documents locally with LangChain, LocalAI, Chroma, and GPT4All
    • Tutorial to use k8sgpt with LocalAI

    Citation

    If you use this repository or its data in a downstream project, please consider citing it with:

    code
    @misc{localai,
      author = {Ettore Di Giacinto},
      title = {LocalAI: The free, Open source OpenAI alternative},
      year = {2023},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://github.com/go-skynet/LocalAI}},
    }

    ❤️ Sponsors

    Do you find LocalAI useful?

    Support the project by becoming a backer or sponsor. Your logo will show up here with a link to your website.

    A huge thank you to our generous sponsors who support this project by covering CI expenses. See our full Sponsor list:

    🌟 Star history

    LocalAI Star history Chart

    📖 License

    LocalAI is a community-driven project created by Ettore Di Giacinto.

    MIT - Author Ettore Di Giacinto

    🙇 Acknowledgements

    LocalAI couldn't have been built without the help of great software already available from the community. Thank you!

    • llama.cpp
    • https://github.com/tatsu-lab/stanford_alpaca
    • https://github.com/cornelk/llama-go for the initial ideas
    • https://github.com/antimatter15/alpaca.cpp
    • https://github.com/EdVince/Stable-Diffusion-NCNN
    • https://github.com/ggerganov/whisper.cpp
    • https://github.com/rhasspy/piper

    🤗 Contributors

    This is a community project, a special thanks to our contributors! 🤗

    Similar MCP (based on tags & features):

    • Anyquery (Go · 1.4k)
    • Anilist Mcp (TypeScript · 57)
    • Fal Mcp Server (Python · 8)
    • Mcp Ipfs (TypeScript · 11)

    Trending MCP (most active this week):

    • Playwright Mcp (TypeScript · 22.1k)
    • Serena (Python · 14.5k)
    • Mcp Playwright (TypeScript · 4.9k)
    • Mcp Server Cloudflare (TypeScript · 3.0k)
