MLX Local Inference Stack is an OpenClaw skill providing a full local AI inference stack on Apple Silicon Macs via MLX. It includes LLM chat (Qwen3-14B, Gemma3-12B), speech-to-text ASR (Qwen3-ASR, Whisper), and text embeddings. It belongs to the Other collection.
MLX Local Inference Stack: speech-to-text, on-device.
MLX Local Inference Stack has 112 downloads so far.
Installs in one command: `openclaw skill install mlx-local-inference`
Works inside your existing OpenClaw setup. No extra config.
Open-source. Community-maintained.
Installing MLX Local Inference Stack in OpenClaw takes just one command. Make sure you have OpenClaw set up and running before proceeding.
Run the following command in your terminal to add MLX Local Inference Stack to your OpenClaw instance:
```
openclaw skill install mlx-local-inference
```
Confirm the skill is properly installed and ready to use:
```
openclaw skill list
```
The skill is now available in your OpenClaw conversations. Simply describe what you want to accomplish, and OpenClaw will automatically invoke MLX Local Inference Stack when relevant.
Skill details:

| Field | Value |
| --- | --- |
| Author | bendusy |
| Category | Other |
| Version | 2.2.0 |
| Updated | 2026-03-01 |
| Downloads | 112 |
| Score | 134 |
| Homepage | https://clawhub.ai/bendusy/mlx-local-inference |

What people do with MLX Local Inference Stack:
MLX is a flexible and efficient array framework for numerical computing and machine learning on Apple silicon. Its fundamental features include unified memory, lazy computation, and function transformations. With MLX Local Inference Stack on OpenClaw, you can explore these directly from your AI assistant.
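As a rough illustration of those fundamentals, here is a minimal sketch using the MLX Python API (assuming `pip install mlx` on an Apple Silicon Mac; the array values are illustrative):

```python
# Minimal sketch of MLX fundamentals: lazy computation and
# function transformations.
import mlx.core as mx

# Arrays live in unified memory, shared by CPU and GPU.
a = mx.array([1.0, 2.0, 3.0])
b = mx.array([4.0, 5.0, 6.0])

# Operations are lazy: `c` is a node in a compute graph, not a result yet.
c = a * b + 1.0
mx.eval(c)  # force evaluation of the graph
print(c)    # array([5, 11, 19], dtype=float32)

# Function transformations: mx.grad turns a scalar-valued function
# into a new function that computes its gradient.
def loss(x):
    return (x ** 2).sum()

grad_fn = mx.grad(loss)
print(grad_fn(a))  # array([2, 4, 6], dtype=float32)
```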
A production AI inference stack typically consists of three core components: an end-user application that serves as the entry point for user interactions and request handling, an inference API server that exposes the model over the network, and the inference engine that actually runs the model (here, MLX on-device). With MLX Local Inference Stack on OpenClaw, you can handle this directly from your AI assistant.
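As a sketch of the middle component only, here is a minimal local inference API server built on Python's standard library and the `mlx-lm` package; the model repo, port, and request shape are assumptions for illustration, not part of the skill itself:

```python
# Minimal local inference API server, assuming `pip install mlx-lm`.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

from mlx_lm import load, generate

# Load once at startup; the weights stay resident in unified memory.
model, tokenizer = load("mlx-community/Qwen3-14B-4bit")  # illustrative repo id

class ChatHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Expect a JSON body like {"prompt": "..."} (assumed request shape).
        length = int(self.headers["Content-Length"])
        body = json.loads(self.rfile.read(length))
        text = generate(model, tokenizer, prompt=body.get("prompt", ""),
                        max_tokens=256)
        payload = json.dumps({"completion": text}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), ChatHandler).serve_forever()
```

A real deployment would add streaming, batching, and authentication; `mlx-lm` also ships its own OpenAI-compatible server (`python -m mlx_lm.server`) that could stand in for this hand-rolled handler.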
Apple MLX is a good choice for Mac users, researchers, and developers who want to experiment with machine learning models locally, test ideas quickly, or build on-device ML workflows without cloud dependencies. With MLX Local Inference Stack on OpenClaw, you can handle this directly from your AI assistant.
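For the speech-to-text side, a quick local experiment might look like the following, assuming the `mlx-whisper` package (`pip install mlx-whisper`); the audio file name and model repo are hypothetical:

```python
# Minimal on-device speech-to-text sketch; everything runs locally,
# with no cloud dependency.
import mlx_whisper

result = mlx_whisper.transcribe(
    "meeting.wav",  # hypothetical local audio file
    path_or_hf_repo="mlx-community/whisper-small-mlx",  # illustrative model
)
print(result["text"])
```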
Run "openclaw skill install mlx-local-inference" in your terminal. OpenClaw must be set up first. After install, the skill is available in your conversations automatically.
Yes. MLX Local Inference Stack is free and open-source. Install it from the OpenClaw skill directory at no cost. Maintained by bendusy.
Add MLX Local Inference Stack to your OpenClaw setup. One command. Done.
Discover other popular skills in the Other category.