//***********************************************
//***************** SETTINGS ********************
//***********************************************
:doctype: book
:use-link-attrs:
:linkattrs:
// Github Icons
ifdef::env-github[]
:tip-caption: :bulb:
:note-caption: :information_source:
:important-caption: :heavy_exclamation_mark:
:caution-caption: :fire:
:warning-caption: :warning:
endif::[]
// Table of Contents
:toc:
:toclevels: 2
:toc-title:
:toc-placement!:
:sectanchors:
// Numbered sections
:sectnums:
:sectnumlevels: 2
// Links
:cc-by-nc-sa: http://creativecommons.org/licenses/by-nc-sa/4.0/
//************* END OF SETTINGS ******************
//************************************************
// Header
++++
<h1 align="center">BirdNET-Analyzer</h1>
<p align="center">Automated scientific audio data processing and bird ID.</p>
++++

// Badges
:license-badge: https://badgen.net/badge/License/CC-BY-NC-SA%204.0/green
:os-badge: https://badgen.net/badge/OS/Linux%2C%20Windows%2C%20macOS/blue
:species-badge: https://badgen.net/badge/Species/6512/blue
:downloads-badge: https://www-user.tu-chemnitz.de/~johau/birdnet_total_downloads_badge.php
:twitter-badge: https://img.shields.io/twitter/follow/BirdNET_App
:reddit-badge: https://img.shields.io/reddit/subreddit-subscribers/BirdNET_Analyzer?style=social
// Mail icon from FontAwesome
:mail-badge: https://img.shields.io/badge/Mail us!-ccb--birdnet%40cornell.edu-yellow.svg?style=social&logo=data:image/svg%2bxml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHZpZXdCb3g9IjAgMCA1MTIgNTEyIj48IS0tISBGb250IEF3ZXNvbWUgUHJvIDYuNC4wIGJ5IEBmb250YXdlc29tZSAtIGh0dHBzOi8vZm9udGF3ZXNvbWUuY29tIExpY2Vuc2UgLSBodHRwczovL2ZvbnRhd2Vzb21lLmNvbS9saWNlbnNlIChDb21tZXJjaWFsIExpY2Vuc2UpIENvcHlyaWdodCAyMDIzIEZvbnRpY29ucywgSW5jLiAtLT48cGF0aCBkPSJNNjQgMTEyYy04LjggMC0xNiA3LjItMTYgMTZ2MjIuMUwyMjAuNSAyOTEuN2MyMC43IDE3IDUwLjQgMTcgNzEuMSAwTDQ2NCAxNTAuMVYxMjhjMC04LjgtNy4yLTE2LTE2LTE2SDY0ek00OCAyMTIuMlYzODRjMCA4LjggNy4yIDE2IDE2IDE2SDQ0OGM4LjggMCAxNi03LjIgMTYtMTZWMjEyLjJMMzIyIDMyOC44Yy0zOC40IDMxLjUtOTMuNyAzMS41LTEzMiAwTDQ4IDIxMi4yek0wIDEyOEMwIDkyLjcgMjguNyA2NCA2NCA2NEg0NDhjMzUuMyAwIDY0IDI4LjcgNjQgNjRWMzg0YzAgMzUuMy0yOC43IDY0LTY0IDY0SDY0Yy0zNS4zIDAtNjQtMjguNy02NC02NFYxMjh6Ii8+PC9zdmc+

image:{license-badge}[CC BY-NC-SA 4.0, link={cc-by-nc-sa}] image:{os-badge}[Supported OS, link=""] image:{species-badge}[Number of species, link=""] image:{downloads-badge}[Downloads, link=""]

[.text-center]
image:{mail-badge}[Email, link=mailto:ccb-birdnet@cornell.edu, height=25] image:https://img.shields.io/twitter/follow/BirdNET_App[Twitter Follow, link=https://twitter.com/BirdNET_App, height=25] image:{reddit-badge}[Subreddit subscribers, link="https://reddit.com/r/BirdNET_Analyzer", height=25]
[discrete]
== Introduction
This repo contains BirdNET models and scripts for processing large amounts of audio data or single audio files. It is the most advanced version of BirdNET for acoustic analyses, and we will keep this repository up to date with new models and improved interfaces to enable scientists with no CS background to run the analysis.
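For batch analyses from the command line, the repository's `analyze.py` script is the usual entry point. A hedged sketch of a typical invocation (all paths are placeholders, and the coordinates/week values below are purely illustrative; check `python3 analyze.py --help` in your checkout for the flags supported by your release):

```shell
# Assumption: run from a cloned BirdNET-Analyzer checkout with its Python
# dependencies installed. --i/--o take an input folder (or single file) and an
# output folder; --lat/--lon/--week let the species range model narrow the
# candidate species list for your recording site and season.
python3 analyze.py \
    --i /path/to/recordings \
    --o /path/to/results \
    --lat 42.48 --lon -76.45 \
    --week 24 \
    --min_conf 0.5 \
    --threads 4
```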
https://github.com/kahst/BirdNET-Analyzer/releases/download/v1.2.0/BirdNET-Analyzer-GUI-1.2.0-win.exe[*Click here to download the Windows installer*] and follow the https://github.com/kahst/BirdNET-Analyzer#setup-windows[setup instructions].
https://tuc.cloud/index.php/s/2TX59Qda2X92Ppr/download/BirdNET_GLOBAL_6K_V2.4_Model_Raven.zip[*Download the newest Raven model here*] and follow the https://github.com/kahst/BirdNET-Analyzer#setup-raven-pro[setup instructions].
Feel free to use BirdNET for your acoustic analyses and research. If you do, please cite as:
[source,bibtex]
----
@article{kahl2021birdnet,
  title = {BirdNET: A deep learning solution for avian diversity monitoring},
  author = {Kahl, Stefan and Wood, Connor M and Eibl, Maximilian and Klinck, Holger},
  journal = {Ecological Informatics},
  volume = {61},
  pages = {101236},
  year = {2021},
  publisher = {Elsevier}
}
----
This work is licensed under a {cc-by-nc-sa}[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License].
[discrete]
== About
Developed by the https://www.birds.cornell.edu/ccb/[K. Lisa Yang Center for Conservation Bioacoustics] at the https://www.birds.cornell.edu/home[Cornell Lab of Ornithology] in collaboration with https://www.tu-chemnitz.de/index.html.en[Chemnitz University of Technology].
Go to https://birdnet.cornell.edu to learn more about the project.
Want to use BirdNET to analyze a large dataset? Don't hesitate to contact us: ccb-birdnet@cornell.edu
Follow us on Twitter https://twitter.com/BirdNET_App[@BirdNET_App]
We also have a discussion forum on https://reddit.com/r/BirdNET_Analyzer[Reddit] if you have a general question or just want to chat.
Have a question, remark, or feature request? Please start a new issue thread to let us know. Feel free to submit a pull request.
[discrete]
== Contents

toc::[]
== Usage guide
This document provides instructions for downloading and installing the GUI, and conducting some of the most common types of analyses. Within the document, a link is provided to download example sound files that can be used for practice.
Download the PDF here: https://zenodo.org/records/8357176[BirdNET-Analyzer Usage Guide]
Watch our presentation on how to use BirdNET-Analyzer to train your own models: https://youtu.be/HuEZGIPeyq0[BirdNET - BioacousTalks at YouTube]
== Showroom
BirdNET powers a number of fantastic community projects dedicated to bird song identification, all of which use models from this repository. Here are some highlights; make sure to check them out!
.Community projects
[cols=",", options="header"]
|===
| Project | Description
| image:https://tuc.cloud/index.php/s/cDqtQxo8yMRkNYP/download/logo_box_loggerhead.png[HaikuBox,300,link=https://haikubox.com] | HaikuBox +
Once connected to your WiFi, Haikubox will listen for birds 24/7. When BirdNET finds a match between its thousands of labeled sounds and the birdsong in your yard, it identifies the bird species and shares a three-second audio clip to the Haikubox website and smartphone app.
Learn more at: https://haikubox.com[HaikuBox.com]
| image:https://tuc.cloud/index.php/s/WKCZoE9WSjimDoe/download/logo_box_birdnet-pi.png[BirdNET-PI,300,link=https://birdnetpi.com] | BirdNET-Pi +
Built on the TFLite version of BirdNET, this project uses pre-built TFLite binaries for Raspberry Pi to run on-device sound analyses. It is able to recognize bird sounds from a USB sound card in real time and share its data with the rest of the world.
Note: You can find the most up-to-date version of BirdNET-PI at https://github.com/Nachtzuster/BirdNET-Pi[github.com/Nachtzuster/BirdNET-Pi]
Learn more at: https://birdnetpi.com[BirdNETPi.com]
| image:https://tuc.cloud/index.php/s/jDtyG9W36WwKpbR/download/logo_box_birdweather.png[BirdWeather,300,link=https://app.birdweather.com] | BirdWeather +
This site was built to be a living library of bird vocalizations. Using the BirdNET artificial neural network, BirdWeather is continuously listening to over 1,000 active stations around the world in real time.
Learn more at: https://app.birdweather.com[BirdWeather.com]
| image:https://tuc.cloud/index.php/s/kqT7GXXzfDs3NyA/download/birdnetlib-logo.png[birdnetlib,300,link=https://joeweiss.github.io/birdnetlib/] | birdnetlib +
A Python API for BirdNET-Analyzer and BirdNET-Lite, providing a common interface to both.
Learn more at: https://joeweiss.github.io/birdnetlib/[joeweiss.github.io/birdnetlib]
| image:https://tuc.cloud/index.php/s/zpNkXJq7je3BKNE/download/logo_box_ecopi_bird.png[ecoPI:Bird,300,link=https://oekofor.netlify.app/en/portfolio/ecopi-bird_en/] | ecoPi:Bird +
The ecoPi:Bird is a device for automated acoustic recordings of bird songs and calls, with a self-sufficient power supply. It facilitates economical long-term monitoring, implemented with minimal personnel requirements.
Learn more at: https://oekofor.netlify.app/en/portfolio/ecopi-bird_en/[oekofor.netlify.app]
| image:https://tuc.cloud/index.php/s/HQiPxG2rKbmDb64/download/dawn_chorus_logo.png[DawnChorus,300,link=https://dawn-chorus.org/en/] | Dawn Chorus +
Dawn Chorus invites global participation to record bird sounds for biodiversity research, art, and raising awareness. This project aims to sharpen our senses and creativity by connecting us more deeply with the wonders of nature.
Learn more at: https://dawn-chorus.org/en/[dawn-chorus.org]
| image:https://tuc.cloud/index.php/s/M27nZ4LmNaNEKMg/download/chirpity_logo.png[Chirpity,300,link=https://chirpity.mattkirkland.co.uk] | Chirpity +
Discover the wonders of bird identification with Chirpity, a desktop application powered by cutting-edge machine learning. With the option to choose between BirdNET or the native Chirpity model, finely tuned for nocturnal flight calls, you have the flexibility to tailor your analysis to your specific needs. Perfect for enthusiasts and researchers alike, Chirpity is particularly well-suited for nocmig and other extensive field recordings. Chirpity is available on both Windows and Mac platforms.
Learn more at: https://chirpity.mattkirkland.co.uk[chirpity.mattkirkland.co.uk]
| image:https://raw.githubusercontent.com/tphakala/birdnet-go/main/doc/BirdNET-Go-logo.webp[Go-BirdNET,300,link=https://github.com/tphakala/go-birdnet] | Go-BirdNET +
Go-BirdNET is an application inspired by BirdNET-Analyzer. While the original BirdNET is based on Python, Go-BirdNET is built using Golang, aiming for simplified deployment across multiple platforms, from Windows PCs to single-board computers like the Raspberry Pi.
Learn more at: https://github.com/tphakala/go-birdnet[github.com/tphakala/go-birdnet]
| image:https://github.com/woheller69/whoBIRD/blob/master/fastlane/metadata/android/en-US/images/icon.png[whoBIRD,300,link=https://github.com/woheller69/whoBIRD] | whoBIRD +
whoBIRD empowers you to identify birds anywhere, anytime, without an internet connection. Built upon the TFLite version of BirdNET, this Android application harnesses the power of machine learning to recognize birds directly on your device.
Learn more at: https://github.com/woheller69/whoBIRD[whoBIRD]
| image:https://github.com/ssciwr/faunanet/blob/master/faunanet_logo.png[faunanet,300,link=https://github.com/ssciwr/faunanet] | faunanet +
faunanet provides a platform for bioacoustics research projects and is an extension of BirdNET-Analyzer based on birdnetlib. faunanet is written in pure Python and is developed by the Scientific Software Center at the University of Heidelberg, Germany.
Learn more at: https://github.com/ssciwr/faunanet[faunanet]
|===
Other cool projects:
- BirdCAGE is an application for monitoring the bird songs in audio streams: https://github.com/mmcc-xx/BirdCAGE[BirdCAGE at GitHub]
- BattyBirdNET-Analyzer is a tool to assist in the automated classification of bat calls: https://github.com/rdz-oss/BattyBirdNET-Analyzer[BattyBirdNET-Analyzer at GitHub]
Working on a cool project that uses BirdNET? Let us know and we can feature your project here.
== Projects map
We have created an interactive map of projects that use BirdNET. If you are working on a project that uses BirdNET, please let us know https://github.com/kahst/BirdNET-Analyzer/issues/221[here] and we can add it to the map.
You can access the map here: https://kahst.github.io/BirdNET-Analyzer/projects.html[Open projects map]
== Model version update
[discrete]
==== V2.4, June 2023
- more than 6,000 species worldwide
- covers frequencies from 0 Hz to 15 kHz with two-channel spectrogram (one for low and one for high frequencies)
- 0.826 GFLOPs, 50.5 MB as FP32
- enhanced and optimized metadata model
- global selection of species (birds and non-birds) with 6,522 classes (incl. 10 non-event classes)
You can find a list of previous versions here: https://github.com/kahst/BirdNET-Analyzer/tree/main/checkpoints[BirdNET-Analyzer Model Version History]
[discrete]
==== Species range model V2.4 - V2, Jan 2024
- updated species range model based on eBird data
- more accurate (spatial) species range prediction
- slightly increased long-tail distribution in the temporal resolution
- see https://github.com/kahst/BirdNET-Analyzer/discussions/234[this discussion post] for more details
== Technical Details
Model V2.4 uses the following settings:
- 48 kHz sampling rate (we up- and downsample automatically and can deal with artifacts from lower sampling rates)
- we compute two mel spectrograms as input for the convolutional neural network:
** first one has fmin = 0 Hz and fmax = 3 kHz; nfft = 2048; hop size = 278; 96 mel bins
** second one has fmin = 500 Hz and fmax = 15 kHz; nfft = 1024; hop size = 280; 96 mel bins
- both spectrograms have a final resolution of 96x511 pixels
- raw audio will be normalized between -1 and 1 before spectrogram conversion
- we use non-linear magnitude scaling as mentioned in http://ceur-ws.org/Vol-2125/paper_181.pdf[Schlüter 2018]
- V2.4 uses an EfficientNetB0-like backbone with a final embedding size of 1024
- See https://github.com/kahst/BirdNET-Analyzer/issues/177#issuecomment-1772538736[this comment] for more details
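As a quick sanity check on the numbers above, the 96x511 spectrogram resolution follows directly from BirdNET's 3-second analysis chunks at 48 kHz and the listed STFT parameters. A minimal pure-Python sketch (assuming standard framing without padding; the exact padding scheme is not documented here, and `peak_normalize` is an illustrative helper, not the repository's actual preprocessing code):

```python
def num_frames(num_samples, nfft, hop):
    """Frame count for a standard STFT without padding."""
    return (num_samples - nfft) // hop + 1

def peak_normalize(samples):
    """Scale raw audio into [-1, 1] by its absolute peak (illustrative)."""
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak for s in samples]

SAMPLE_RATE = 48_000             # Hz
CHUNK_SECONDS = 3                # BirdNET analyzes 3-second windows
n = SAMPLE_RATE * CHUNK_SECONDS  # 144,000 samples per chunk

# Low-frequency spectrogram: nfft = 2048, hop = 278
frames_low = num_frames(n, nfft=2048, hop=278)
# High-frequency spectrogram: nfft = 1024, hop = 280
frames_high = num_frames(n, nfft=1024, hop=280)

print(frames_low, frames_high)   # prints: 511 511
```

Both parameter sets deliberately land on the same 511 time frames, so the two 96-bin spectrograms line up as the two channels of a single 96x511 network input.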
== Setup

=== Setup (Raven Pro)
If you want to analyze audio files without any additional coding or package installs, you can now use https://ravensoundsoftware.com/software/raven-pro/[Raven Pro software] to run BirdNET models. After downloading the model, BirdNET is available through the new "Learning detector" feature in Raven Pro. For more information on how to use this feature, please visit the https://ravensoundsoftware.com/article-categories/learning-detector/[Raven Pro Knowledge Base].