Segment Anything

An implementation of the Segment Anything Model (SAM) in MLX. See the original repo by Meta AI for more details [1].

Installation

pip install -r requirements.txt

Convert

python convert.py --hf-path facebook/sam-vit-base --mlx-path sam-vit-base

The safetensors weight file and configs are downloaded from Hugging Face, converted, and saved in the directory specified by --mlx-path.

The available model sizes are:

  • facebook/sam-vit-base
  • facebook/sam-vit-large
  • facebook/sam-vit-huge
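
For example, to convert the large variant and save it to sam-vit-large:

python convert.py --hf-path facebook/sam-vit-large --mlx-path sam-vit-large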

Run

See the example notebooks notebooks/predictor_example.ipynb and notebooks/automatic_mask_generator_example.ipynb to try the Segment Anything Model with MLX.
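
As a rough sketch of the point-prompt workflow, the port mirrors the original repo's predictor API. The loader, registry key, and paths below are assumptions for illustration; the notebooks are the authoritative reference:

import numpy as np
from PIL import Image

# Assumed imports, mirroring the original SAM repo's API; check the
# notebooks for the exact names used in this MLX port.
from segment_anything import SamPredictor, sam_model_registry

# Load weights converted by convert.py (path is illustrative).
sam = sam_model_registry["vit_base"]("sam-vit-base")
predictor = SamPredictor(sam)

# Embed the image once, then issue as many prompts as needed.
image = np.array(Image.open("example.jpg").convert("RGB"))
predictor.set_image(image)

# A single point prompt at pixel (x, y); label 1 marks it as foreground.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
)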

You can also generate masks from the command line:

python main.py --model <path/to/model> --input <image_or_folder> --output <path/to/output>
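
For instance, to write masks for every image in a folder using the converted base model (paths here are illustrative):

python main.py --model sam-vit-base --input images --output masks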

[1] The original Segment Anything GitHub repo: https://github.com/facebookresearch/segment-anything