Docker Model Runner will be available in version 4.40 of Docker Desktop, coming soon...

To be honest, for a long time I wasn’t really into the AI/LLM hype. Like many, I started using AI assistants in my IDE, but nothing more for some time.

And then, while working on Dagger (I’m building the Dagger Java SDK, you should give it a try 😉), I started to play with LLMs. Because Dagger makes it so easy to work with them!

That was a first step.

But then, as a (fresh) Docker Captain, I got access to the new Docker Model Runner feature.

And that’s where I started to connect them both… with a lot of fun!

Docker Model Runner

Docker Model Runner is a new feature available on Docker Desktop. It aims to simplify the way you can run AI models on your machine, natively!

Yes, natively! That means it does not run models in containers!

But it makes them easy to use with containers.

Why is it interesting to run your models locally?

→ Because, for instance, you don’t want your data to leave your trusted machines!

And why is it interesting to run them with Docker Model Runner when there’s already ollama, and llama.cpp, and vLLM, and gpt4all, and …?

→ Because you don’t have to learn a new tool. If you are interested in interacting with containers, you probably already have Docker Desktop. And the pull and run commands are already in your muscle memory.
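To illustrate, the Model Runner commands mirror the classic Docker workflow. A quick sketch (the model name below is one from the curated list; the exact CLI may still evolve before the final 4.40 release):

```shell
# Pull a model from Docker Hub's "ai" namespace (it's an OCI artifact under the hood)
docker model pull ai/smollm2

# List the models available on your machine
docker model list

# Run it with a one-shot prompt (or interactively, by omitting the prompt)
docker model run ai/smollm2 "Explain OCI artifacts in one sentence."
```

If you know `docker pull` and `docker run`, you already know how to use it.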

→ Because it comes with a curated list of models, hosted in Docker Hub’s ai namespace. It means your models are packaged as OCI artifacts 🤩 More about that in a future post!

Running models on your machine is heavily tied to your hardware. This first version only supports Apple's Metal API.

In short, it works on Apple Silicon machines.

Support for other hardware might come later.

Dagger

Dagger is a cross-platform composition engine. It started close to the CI/CD world. But it’s way more than that.

In short, it allows you to create small agents as composable modules. There are a lot of gems packed into those few words.

It’s composable, modular… and runs on containers, using a common API.

→ It means you can smoothly mix modules written in different languages. For instance, if your platform team works in Go, and you have a backend team in Java and a frontend team in TypeScript, they can all create modules and reuse each other’s. Each in their own language. That’s a game changer!

→ To quote Solomon Hykes, the creator of Docker and Dagger:

What if Bash stole the best ideas from Docker, Make and Nix?

Native support for containers and secrets; typed objects; declarative execution, sandboxed and cached by default… What if those were just standard shell features? 🤩

Well, we built it! Say hello to Dagger Shell.

A Shell for the Container Age: Introducing Dagger Shell

And to come back to LLMs: one of the primitives exposed by Dagger is LLM. And it makes things so easy and fun to work with! I mean, really!
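To give you an idea, here is what a minimal interaction with the LLM primitive looks like in Dagger Shell. This is a sketch: the prompt is a placeholder, and the function names (`llm`, `with-prompt`, `last-reply`) reflect Dagger's LLM API at the time of writing and may evolve:

```shell
# Start an interactive Dagger Shell session
dagger

# Inside the shell: send a prompt to the configured LLM and print its reply
llm | with-prompt "Summarize what an OCI artifact is." | last-reply
```

That’s it: the LLM is a typed object you pipe through, just like a container or a directory.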

When LLMs meet Containers

So, to show what can be done with Docker Model Runner and Dagger together, I’ve recorded a quick video presenting two small agents I’ve created:

  • One that takes a subreddit and summarizes its last 24 hours of posts
  • One that creates disposable dev environments, with no configuration, by analyzing the directory we pass to the LLM

If you are at KubeCon in London, come join us at the Docker + Dagger event on April 2nd!

Kubecon London AI Meetup: Join Docker and Dagger for drinks and demos!