
nnDetection

A self-configuring framework for medical object detection

nnDetection is a self-configuring framework for medical object detection that adapts automatically to different medical detection tasks. On public benchmarks such as ADAM and LUNA16 it achieves performance on par with or superior to state-of-the-art methods. The project supports both Docker containers and local installation and provides preparation guides for several medical data sets, making it easy to reproduce experiments and integrate new data sets. nnDetection offers a standardized interface and an automated workflow for medical object detection research.

<img src=docs/source/nnDetection.svg width="600px">


What is nnDetection?

Simultaneous localisation and categorization of objects in medical images, also referred to as medical object detection, is of high clinical relevance because diagnostic decisions depend on rating of objects rather than e.g. pixels. For this task, the cumbersome and iterative process of method configuration constitutes a major research bottleneck. Recently, nnU-Net has tackled this challenge for the task of image segmentation with great success. Following nnU-Net’s agenda, in this work we systematize and automate the configuration process for medical object detection. The resulting self-configuring method, nnDetection, adapts itself without any manual intervention to arbitrary medical detection problems while achieving results on par with or superior to the state-of-the-art. We demonstrate the effectiveness of nnDetection on two public benchmarks, ADAM and LUNA16, and propose 10 further public data sets for a comprehensive evaluation of medical object detection methods.

If you use nnDetection please cite our paper:

Baumgartner M., Jäger P.F., Isensee F., Maier-Hein K.H. (2021) nnDetection: A Self-configuring Method for Medical Object Detection. In: de Bruijne M. et al. (eds) Medical Image Computing and Computer Assisted Intervention – MICCAI 2021. MICCAI 2021. Lecture Notes in Computer Science, vol 12905. Springer, Cham. https://doi.org/10.1007/978-3-030-87240-3_51

:tada: nnDetection was early accepted to the International Conference on Medical Image Computing & Computer Assisted Intervention 2021 (MICCAI21) :tada:

Installation

Docker

The easiest way to get started with nnDetection is to build a Docker container with the provided Dockerfile.

Please install docker and nvidia-docker2 before continuing.

All projects which are based on nnDetection assume that the base image was built with the tagging scheme nndetection:[version]. To build a container (nnDetection version 0.1), run the following command from the base directory:

docker build -t nndetection:0.1 --build-arg env_det_num_threads=6 --build-arg env_det_verbose=1 .

(--build-arg env_det_num_threads=6 and --build-arg env_det_verbose=1 are optional and can be used to override the default parameters)

The Docker container expects data and models in its own /opt/data and /opt/models directories, respectively. These directories need to be mounted via docker -v. For simplicity and speed, the environment variables det_data and det_models can be set on the host system to point to the desired directories. To run:

docker run --gpus all -v ${det_data}:/opt/data -v ${det_models}:/opt/models -it --shm-size=24gb nndetection:0.1 /bin/bash
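
Here, ${det_data} and ${det_models} expand from environment variables set on the host. A minimal sketch, assuming placeholder paths that should be adjusted to your system:

export det_data=/home/$USER/nndet_data     # host directory that holds all data sets
export det_models=/home/$USER/nndet_models # host directory that stores trained models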

Warning: When running a training inside the container it is necessary to increase the shared memory (via --shm-size).

Local

To create a working environment locally with conda, please run:

conda create --name nndet_venv python=3.8
conda activate nndet_venv

Now run the following commands to properly set it up:

git clone https://github.com/MIC-DKFZ/nnDetection.git
cd nnDetection

export CXX=$CONDA_PREFIX/bin/x86_64-conda_cos6-linux-gnu-c++
export CC=$CONDA_PREFIX/bin/x86_64-conda_cos6-linux-gnu-cc

conda install gxx_linux-64==9.3.0
conda install cuda -c nvidia/label/cuda-11.3.1
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch

pip install -r requirements.txt  \
  && pip install hydra-core --upgrade --pre \
  && pip install git+https://github.com/mibaumgartner/pytorch_model_summary.git
FORCE_CUDA=1 pip install -v -e .

Source

Please note that nnDetection requires Python 3.8+. Please use a PyTorch 1.X version for now, not 2.0.

  1. Install CUDA (>10.1) and cuDNN (make sure to select compatible versions!)
  2. [Optional] Depending on your GPU you might need to set TORCH_CUDA_ARCH_LIST; check your GPU's compute capability.
  3. Install torch (PyTorch >1.10, matching your CUDA version) and torchvision (matching your torch version).
  4. Clone nnDetection, cd [path_to_repo] and pip install -e .
  5. Set environment variables (more info can be found below; a shell sketch follows this list):
    • det_data: [required] Path to the source directory where all the data will be located
    • det_models: [required] Path to the directory where all models will be saved
    • OMP_NUM_THREADS=1 : [required] Needs to be set! Otherwise bad things will happen... Refer to the batchgenerators documentation.
    • det_num_threads: [recommended] Number of processes to use for augmentation (at least 6, default 12)
    • det_verbose: [optional] Can be used to deactivate progress bars (activated by default)
    • MLFLOW_TRACKING_URI: [optional] Specify the logging directory of mlflow. Refer to the mlflow documentation for more information.
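
For example, the following lines could be added to ~/.bashrc (the paths are placeholders; adjust them to your system):

export det_data=/home/$USER/nndet_data      # [required] data directory
export det_models=/home/$USER/nndet_models  # [required] model directory
export OMP_NUM_THREADS=1                    # [required] see note above
export det_num_threads=12                   # [recommended] augmentation workers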

Note: nnDetection was developed on Linux => Windows is not supported.

Test Installation
Run the following command in the terminal (!not! in the PyTorch root folder) to verify that the compilation of the C++/CUDA code was successful:
python -c "import torch; import nndet._C; import nndet"

To test the whole installation please run the Toy Data set example.

Maximising Training Speed
To get the best possible performance we recommend using CUDA 11.0+ with cuDNN 8.1.X+ and a (!)locally compiled(!) version of PyTorch 1.7+.

nnDetection

nnDetection Module Overview

nnDetection uses multiple Registries to keep track of different modules and easily switch between them via the config files.

Config Files nnDetection uses Hydra to dynamically configure and compose configurations. The configuration files are located in nndet.conf and can be overwritten to customize the behavior of the pipeline.

AUGMENTATION_REGISTRY The augmentation registry can be imported from nndet.io.augmentation and contains different augmentation configurations. Examples can be found in nndet.io.augmentation.bg_aug.

DATALOADER_REGISTRY The dataloader registry contains different dataloader classes to customize the IO of nnDetection. It can be imported from nndet.io.datamodule and examples can be found in nndet.io.datamodule.bg_loader.

PLANNER_REGISTRY New plans can be registered via the planner registry which contains classes to define and perform different architecture and preprocessing schemes. It can be imported from nndet.planning.experiment and examples can be found in nndet.planning.experiment.v001.

MODULE_REGISTRY The module registry contains the core modules of nnDetection, which inherit from the PyTorch Lightning Module. It is the main module used for training and inference and contains all the necessary steps to build the final models. It can be imported from nndet.ptmodule and examples can be found in nndet.ptmodule.retinaunet.

nnDetection Functional Details

Experiments & Data

The data sets used for our experiments are not hosted or maintained by us; please give credit to the authors of the data sets. For some of the data sets we converted, labels were corrected, and the converted versions can be downloaded (links can be found in the guides). The Experiments section contains multiple guides which explain the preparation of the data sets via the provided scripts.

Toy Data set

Running nndet_example will automatically generate an example data set with 3D squares and squares with holes which can be used to test the installation or experiment with prototype code (it is still necessary to run the other nndet commands to process/train/predict the data set).

# create data to test installation/environment (10 train 10 test)
nndet_example

# create full data set for prototyping (1000 train 1000 test)
nndet_example --full [--num_processes]

The full problem is very easy and the final results should be near perfect. After running the generation script, follow the Planning, Training and Inference instructions below to construct the whole nnDetection pipeline.
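
As a sketch, an end-to-end run on the toy data set could look like the following; the commands are the ones named in this README, but the model identifier and exact arguments may differ between versions, so check nndet_[command] -h before running:

nndet_prep 000
nndet_unpack ${det_data}/Task000D3_Example/preprocessed/D3V001_3d/imagesTr 6
nndet_train 000 --sweep
nndet_consolidate 000 RetinaUNetV001_D3V001_3d --sweep_boxes
nndet_predict 000 RetinaUNetV001_D3V001_3d --fold -1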

Guides

Work in progress

Experiments

Besides the self-configuring method, nnDetection acts as a standard interface for many data sets. We provide guides to prepare all data sets from our evaluation into the correct format and make it easy to reproduce our results. Furthermore, we provide pretrained models which can be used without investing large amounts of compute to rerun our experiments (see Section Pretrained Models).

Adding New Data sets

nnDetection relies on a standardized input format which is very similar to nnU-Net and allows easy integration of new data sets. More details about the format can be found below.

Folders

All data sets should reside inside Task[Number]_[Name] folders inside the specified detection data folder (the path to this folder can be set via the det_data environment variable). To avoid conflicts with our provided pretrained models we recommend using task numbers starting from 100. An overview is provided below ([Name] indicates folders, - indicates files, indents refer to substructures).

Warning[!]: Please avoid any . inside file names/folder names/paths since it can influence how paths/names are split.

${det_data}
    [Task000_Example]
        - dataset.yaml # dataset.json works too
        [raw_splitted]
            [imagesTr]
                - case0000_0000.nii.gz # case0000 modality 0
                - case0000_0001.nii.gz # case0000 modality 1
                - case0001_0000.nii.gz # case0001 modality 0
                - case0001_0001.nii.gz # case0001 modality 1
            [labelsTr]
                - case0000.nii.gz # instance segmentation case0000
                - case0000.json # properties of case0000
                - case0001.nii.gz # instance segmentation case0001
                - case0001.json # properties of case0001
            [imagesTs] # optional, same structure as imagesTr
             ...
            [labelsTs] # optional, same structure as labelsTr
             ...
    [Task001_Example1]
        ...
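
To scaffold this structure for a new task, something like the following can be used (Task100_MyDataset is a placeholder name):

task_dir=${det_data}/Task100_MyDataset
mkdir -p ${task_dir}/raw_splitted/{imagesTr,labelsTr,imagesTs,labelsTs}
touch ${task_dir}/dataset.yaml  # fill in the data set info described below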

Data set Info

dataset.yaml or dataset.json provides general information about the data set. Note [Important]: Classes and modalities start with index 0!

task: Task000D3_Example

name: "Example" # [Optional]
dim: 3 # number of spatial dimensions of the data

# Note: use the integer value of the target class as defined below!
target_class: 1 # [Optional] define class of interest for patient level evaluations
test_labels: True # manually split test set

labels: # classes of data set; need to start at 0
    "0": "Square"
    "1": "SquareHole"

modalities: # modalities of data set; need to start at 0
    "0": "CT"

Image Format

nnDetection uses the same image format as nnU-Net. Each case consists of at least one 3D NIfTI file with a single modality; the files are saved in the images folders. If multiple modalities are available, each modality uses a separate file and the sequence number at the end of the name indicates the modality (these need to correspond to the numbers specified in the data set file and be consistent across the whole data set).

An example with two modalities could look like this:

- case001_0000.nii.gz # Case ID: case001; Modality: 0
- case001_0001.nii.gz # Case ID: case001; Modality: 1

- case002_0000.nii.gz # Case ID: case002; Modality: 0
- case002_0001.nii.gz # Case ID: case002; Modality: 1
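
Raw files often need to be renamed to this scheme first. A minimal sketch for a single-modality data set (the source directory and case names are hypothetical):

for img in /path/to/source/*.nii.gz; do
    base=$(basename "${img}" .nii.gz)  # e.g. case001
    cp "${img}" "${det_data}/Task100_MyDataset/raw_splitted/imagesTr/${base}_0000.nii.gz"
done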

If multiple modalities are available, please check beforehand if they need to be registered and perform registration before nnDetection preprocessing. nnDetection does (!)not(!) include automatic registration of multiple modalities.

Label Format

Labels are encoded with two files per case: one NIfTI file which contains the instance segmentation and one JSON file which includes the "meta" information of each instance. The NIfTI file should contain all annotated instances, where each instance has a unique number and the numbers are consecutive (0 ALWAYS refers to background, 1 refers to the first instance, 2 refers to the second instance, ...). The case[XXXX].json label file needs to provide the class of every instance in the segmentation. In this example the first instance is assigned to class 0 and the second instance is assigned to class 1:

{
    "instances": {
        "1": 0,
        "2": 1
    }
}

Each label file needs a corresponding JSON file to define the classes. We also wrote a Detection Annotation Guide which includes a dedicated section on the nnDetection format with additional visualizations :)
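
Because every instance segmentation needs its JSON counterpart, a quick sanity check before preprocessing can save time; a minimal sketch (this check is not part of nnDetection):

for seg in ${det_data}/Task100_MyDataset/raw_splitted/labelsTr/*.nii.gz; do
    json="${seg%.nii.gz}.json"                 # expected properties file
    [ -f "${json}" ] || echo "missing ${json}" # report cases without a JSON
done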

Using nnDetection

The following paragraph provides a high-level overview of the functionality of nnDetection and which commands are available. A typical flow of commands would look like this:

nndet_prep -> nndet_unpack -> nndet_train -> nndet_consolidate -> nndet_predict

Each of these commands is explained below, and more detailed information can be obtained by running nndet_[command] -h in the terminal.

Planning & Preprocessing

Before training the networks, nnDetection needs to preprocess and analyze the data. The preprocessing stage normalizes and resamples the data, while the analyzed properties are used to create a plan which will be used for configuring the training. nnDetectionV0 requires a GPU with approximately the same amount of VRAM you are planning to use for training (we used an RTX 2080 Ti with no monitor attached) to perform live estimation of the VRAM used by the network. Future releases aim at improving this process...

nndet_prep [tasks] [-o / --overwrites] [-np / --num_processes] [-npp / --num_processes_preprocessing] [--full_check]

# Example
nndet_prep 000

# Script
# /scripts/preprocess.py - main()

The -o option can be used to overwrite parameters for planning.
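
A hypothetical invocation combining these options (some.config.key=value is a placeholder; refer to the config files in nndet.conf for real parameter names):

nndet_prep 000 -o some.config.key=value -np 4 -npp 4 --full_check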
