<div align="center">
<picture>
<img alt="Omni-LPR Logo" src="logo.svg" width="300">
</picture>
<br>
<h2>Omni-LPR</h2>
[Tests](https://github.com/habedi/omni-lpr/actions/workflows/tests.yml)
[Code Coverage](https://codecov.io/gh/habedi/omni-lpr)
[CodeFactor](https://www.codefactor.io/repository/github/habedi/omni-lpr)
[Repository](https://github.com/habedi/omni-lpr)
[PyPI](https://pypi.org/project/omni-lpr/)
[License](https://github.com/habedi/omni-lpr/blob/main/LICENSE)
<br>
[Docs](https://github.com/habedi/omni-lpr/tree/main/docs)
[Examples](https://github.com/habedi/omni-lpr/tree/main/examples)
[Docker (CPU)](https://github.com/habedi/omni-lpr/pkgs/container/omni-lpr-cpu)
[Docker (OpenVINO)](https://github.com/habedi/omni-lpr/pkgs/container/omni-lpr-openvino)
[Docker (CUDA)](https://github.com/habedi/omni-lpr/pkgs/container/omni-lpr-cuda)
A multi-interface (REST and MCP) server for automatic license plate recognition
</div>
---
Omni-LPR is a self-hostable server that provides automatic license plate recognition (ALPR) capabilities via a REST API
and the Model Context Protocol (MCP). It can be used both as a standalone ALPR microservice and as an ALPR toolbox for
AI agents and large language models (LLMs).
### Why Omni-LPR?
Using Omni-LPR offers the following benefits:
- **Decoupling.** Your main application can be in any programming language. It doesn't need to be tangled up with Python
or specific ML dependencies because the server handles all of that.
- **Multiple Interfaces.** You aren't locked into one way of communicating. You can use a standard REST API from any
app, or you can use MCP, which is designed for AI agent integration.
- **Ready-to-Deploy.** You don't have to build it from scratch. There are pre-built Docker images that are easy to
deploy and start using immediately.
- **Hardware Acceleration.** The server is optimized for the hardware you have. It supports generic CPUs (ONNX), Intel
CPUs (OpenVINO), and NVIDIA GPUs (CUDA).
- **Asynchronous I/O.** It's built on Starlette, which means it has high-performance, non-blocking I/O. It can handle
many concurrent requests without getting bogged down.
- **Scalability.** Because it's a separate service, it can be scaled independently of your main application. If you
suddenly need more ALPR power, you can scale Omni-LPR up without touching anything else.
See the [ROADMAP.md](ROADMAP.md) for the list of implemented and planned features.
> [!IMPORTANT]
> Omni-LPR is in early development, so bugs and breaking API changes are expected.
> Please use the [issues page](https://github.com/habedi/omni-lpr/issues) to report bugs or request features.
---
### Quickstart
You can get started with Omni-LPR in a few minutes by following the steps described below.
#### 1. Install the Server
You can install Omni-LPR using `pip`:
```sh
pip install omni-lpr
```
#### 2. Start the Server
Once installed, start the server with a single command:
```sh
omni-lpr
```
By default, the server will be listening on `http://127.0.0.1:8000`.
You can confirm it's running by accessing the health check endpoint:
```sh
curl http://127.0.0.1:8000/api/health
# Sample expected output: {"status": "ok", "version": "0.3.4"}
```
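The same check can be scripted; below is a minimal sketch using only the Python standard library. The helper names (`is_healthy`, `check_health`) are illustrative and not part of Omni-LPR's API; only the `/api/health` endpoint and default address come from the Quickstart above.

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:8000"  # default address from the Quickstart

def is_healthy(payload: dict) -> bool:
    # Interpret the health-check body, e.g. {"status": "ok", "version": "0.3.4"}.
    return payload.get("status") == "ok"

def check_health(base_url: str = BASE_URL) -> bool:
    # Requires a running `omni-lpr` server; raises URLError if it is down.
    with urllib.request.urlopen(f"{base_url}/api/health") as resp:
        return is_healthy(json.load(resp))

if __name__ == "__main__":
    print(check_health())
```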
#### 3. Recognize a License Plate
Now you can make a request to recognize a license plate from an image.
The example below uses a publicly available image URL.
```sh
curl -X POST \
-H "Content-Type: application/json" \
-d '{"path": "https://www.olavsplates.com/foto_n/n_cx11111.jpg"}' \
http://127.0.0.1:8000/api/v1/tools/detect_and_recognize_plate_from_path/invoke
```
You should receive a JSON response with the detected license plate information.
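The curl call above can also be reproduced from Python. The sketch below uses only the standard library; the function names are illustrative, and the response is returned as raw JSON without asserting a schema, since the API may change while the project is in early development.

```python
import json
import urllib.request

TOOL_URL = ("http://127.0.0.1:8000/api/v1/tools/"
            "detect_and_recognize_plate_from_path/invoke")

def build_payload(image_path: str) -> bytes:
    # The tool takes a JSON body with a single "path" field; the Quickstart
    # passes a publicly available image URL.
    return json.dumps({"path": image_path}).encode("utf-8")

def recognize_plate(image_path: str) -> dict:
    # Requires a running Omni-LPR server on the default port.
    req = urllib.request.Request(
        TOOL_URL,
        data=build_payload(image_path),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    result = recognize_plate("https://www.olavsplates.com/foto_n/n_cx11111.jpg")
    print(json.dumps(result, indent=2))
```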
### Usage
Omni-LPR exposes its capabilities as "tools" that can be called via a REST API or over MCP.