Connecting Segment-Anything with CLIP
Our goal is to classify the output masks of segment-anything with an off-the-shelf CLIP model: the cropped image corresponding to each mask is sent to CLIP for classification.
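The crop-then-classify step can be sketched as follows. This is a minimal sketch, not the repository's actual code: `crop_from_mask` and `classify_crops` are illustrative helpers, and in the real pipeline the feature vectors would come from CLIP's `encode_image` / `encode_text`.

```python
import numpy as np

def crop_from_mask(image: np.ndarray, mask: np.ndarray, pad: int = 0) -> np.ndarray:
    """Crop the tight bounding box of a boolean (H, W) mask out of an (H, W, C) image."""
    ys, xs = np.where(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    # Optional padding, clamped to the image borders.
    y0, x0 = max(y0 - pad, 0), max(x0 - pad, 0)
    y1, x1 = min(y1 + pad, image.shape[0]), min(x1 + pad, image.shape[1])
    return image[y0:y1, x0:x1]

def classify_crops(crop_feats: np.ndarray, text_feats: np.ndarray) -> np.ndarray:
    """Assign each crop the best text label by cosine similarity, CLIP-style.

    crop_feats: (N, D) image embeddings; text_feats: (K, D) label embeddings.
    Returns an (N,) array of label indices.
    """
    a = crop_feats / np.linalg.norm(crop_feats, axis=1, keepdims=True)
    b = text_feats / np.linalg.norm(text_feats, axis=1, keepdims=True)
    return (a @ b.T).argmax(axis=1)
```

With real CLIP embeddings, each mask's crop would be resized and preprocessed before encoding; the cosine-similarity argmax above is the standard zero-shot classification rule CLIP uses.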
Other Awesome Works
Editing-related works
- sail-sg/EditAnything
- IDEA-Research/Grounded-Segment-Anything
- geekyutao/Inpaint-Anything
- Luodian/RelateAnything
NeRF-related works
- ashawkey/Segment-Anything-NeRF
- Anything-of-anything/Anything-3D
- Jun-CEN/SegmentAnyRGBD
- Pointcept/SegmentAnything3D
Segmentation-related works
- maxi-w/CLIP-SAM
- Curt-Park/segment-anything-with-clip
- kadirnar/segment-anything-video
- fudan-zvg/Semantic-Segment-Anything
- continue-revolution/sd-webui-segment-anything
- RockeyCoss/Prompt-Segment-Anything
- ttengwang/Caption-Anything
- ngthanhtin/owlvit_segment_anything
- lang-segment-anything
- helblazer811/RefSAM
- Hedlen/awesome-segment-anything
- ziqi-jin/finetune-anything
- ylqi/Count-Anything
- xmed-lab/CLIP_Surgery
- segments-ai/panoptic-segment-anything
- Cheems-Seminar/grounded-segment-any-parts
- aim-uofa/Matcher
- SysCV/sam-hq
- CASIA-IVA-Lab/FastSAM
- ChaoningZhang/MobileSAM
- JamesQFreeman/Sam_LoRA
- UX-Decoder/Semantic-SAM
- cskyl/SAM_WSSS
- ggsDing/SAM-CD
- yformer/EfficientSAM
Annotation-related works
Tracking-related works
Medical-related works
TODO
- We plan to connect segment-anything with MaskCLIP.
- We plan to finetune on the COCO and LVIS datasets.
Run Demo
Download the sam_vit_h_4b8939.pth model from the SAM repository and put it under the ./SAM-CLIP/ directory. Then install the segment-anything and clip packages with the following commands:
cd SAM-CLIP; pip install -e .
pip install git+https://github.com/openai/CLIP.git
Then run the following script:
sh run.sh
Example
Feed the SAM model an example image and a point (250, 250). The input image and the three output masks are shown below:
The three masks and their corresponding predicted categories are shown below:
You can change the point location at lines 273-274 of scripts/amp_points.py:
## input points
input_points_list = [[250, 250]]
label_list = [1]
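For reference, these two lists map directly onto the prompt arrays that segment-anything's `SamPredictor.predict` expects. The sketch below only shows the array conversion; the predictor call itself is left commented out, since it requires the downloaded checkpoint and a loaded image.

```python
import numpy as np

input_points_list = [[250, 250]]
label_list = [1]

# SamPredictor.predict takes point_coords as an (N, 2) float array in
# (x, y) pixel order, and point_labels as an (N,) array where 1 marks
# a foreground click and 0 a background click.
point_coords = np.asarray(input_points_list, dtype=np.float32)
point_labels = np.asarray(label_list, dtype=np.int64)

# masks, scores, logits = predictor.predict(
#     point_coords=point_coords,
#     point_labels=point_labels,
#     multimask_output=True,  # returns three candidate masks, as above
# )
```

With `multimask_output=True`, SAM returns three candidate masks per prompt, which is why the example above shows three masks for a single point.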