A list of demo websites for automatic music and audio generation research
text-to-music/audio
- control-transfer-diffusion (diffusion; demerlé24ismir): https://nilsdem.github.io/control-transfer-diffusion/
- AP-adapter (diffusion; tsai24arxiv): https://rebrand.ly/AP-adapter
- MusiConGen (transformer; lan24arxiv): https://musicongen.github.io/musicongen_demo/
- Stable Audio Open (diffusion; evans24arxiv): https://stability-ai.github.io/stable-audio-open-demo/
- MEDIC (diffusion; liu24arxiv): https://medic-zero.github.io/
- MusicGenStyle (transformer; rouard24ismir): https://musicgenstyle.github.io/
- MelodyFlow (transformer+diffusion; lelan24arxiv): https://melodyflow.github.io/
- MelodyLM (transformer+diffusion; li24arxiv): https://melodylm666.github.io/
- JASCO (flow; tal24arxiv): https://pages.cs.huji.ac.il/adiyoss-lab/JASCO/
- MusicFlow (diffusion; prajwal24icml): N/A
- Diff-A-Riff (diffusion; nistal24arxiv): https://sonycslparis.github.io/diffariff-companion/
- DITTO-2 (diffusion; novack24arxiv): https://ditto-music.github.io/ditto2/
- SoundCTM (diffusion; saito24arxiv): N/A
- Instruct-MusicGen (transformer; zhang24arxiv): https://foul-ice-5ea.notion.site/Instruct-MusicGen-Demo-Page-Under-construction-a1e7d8d474f74df18bda9539d96687ab
- Stable Audio 2 (diffusion; evans24arxiv): https://stability-ai.github.io/stable-audio-2-demo/
- Melodist (transformer; hong24arxiv): https://text2songmelodist.github.io/Sample/
- SMITIN (transformer; koo24arxiv): https://wide-wood-512.notion.site/SMITIN-Self-Monitored-Inference-Time-INtervention-for-Generative-Music-Transformers-Demo-Page-983723e6e9ac4f008298f3c427a23241
- Stable Audio (diffusion; evans24arxiv): https://stability-ai.github.io/stable-audio-demo/
- MusicMagus (diffusion; zhang24ijcai): https://wry-neighbor-173.notion.site/MusicMagus-Zero-Shot-Text-to-Music-Editing-via-Diffusion-Models-8f55a82f34944eb9a4028ca56c546d9d
- DITTO (diffusion; novack24arxiv): https://ditto-music.github.io/web/
- MAGNeT (transformer; ziv24arxiv): https://pages.cs.huji.ac.il/adiyoss-lab/MAGNeT/
- Mustango (diffusion; melechovsky24naacl): https://github.com/AMAAI-Lab/mustango
- Music ControlNet (diffusion; wu24taslp): https://musiccontrolnet.github.io/web/
- InstrumentGen (transformer; nercessian23ml4audio): https://instrumentgen.netlify.app/
- Coco-Mulla (transformer; lin23arxiv): https://kikyo-16.github.io/coco-mulla/
- JEN-1 Composer (diffusion; yao23arxiv): https://www.jenmusic.ai/audio-demos
- UniAudio (transformer; yang23arxiv): http://dongchaoyang.top/UniAudio_demo/
- MusicLDM (diffusion; chen23arxiv): https://musicldm.github.io/
- InstructME (diffusion; han23arxiv): https://musicedit.github.io/
- JEN-1 (diffusion; li23arxiv): https://www.futureverse.com/research/jen/demos/jen1
- MusicGen (Transformer; copet23arxiv): https://ai.honu.io/papers/musicgen/ (see the sampling sketch after this list)
- MuseCoco (Transformer; lu23arxiv): https://ai-muzic.github.io/musecoco/ (for symbolic music)
- MeLoDy (Transformer+diffusion; lam23arxiv): https://efficient-melody.github.io/
- MusicLM (Transformer; agostinelli23arxiv): https://google-research.github.io/seanet/musiclm/examples/
- Noise2Music (diffusion; huang23arxiv): https://noise2music.github.io/
- ERNIE-Music (diffusion; zhu23arxiv): N/A
- Riffusion (diffusion;): https://www.riffusion.com/
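Several of the models above come with open-source inference code; MusicGen, for instance, is distributed through Meta's audiocraft library. The snippet below is a minimal sketch of text-to-music sampling with audiocraft, assuming the package is installed; the checkpoint name and generation parameters are illustrative, not prescriptive.
```python
# Minimal text-to-music sampling sketch using Meta's audiocraft library
# (assumes `pip install audiocraft torchaudio`; checkpoint and settings
# below are illustrative choices, not the only supported ones).
import torchaudio
from audiocraft.models import MusicGen

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # length of each generated clip in seconds

# One text prompt per clip; returns a tensor of shape (batch, channels, samples).
wav = model.generate(["lo-fi hip hop beat with mellow piano and soft drums"])

torchaudio.save("musicgen_sample.wav", wav[0].cpu(), model.sample_rate)
```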
text-to-audio
- PicoAudio (diffusion; xie24arxiv): https://zeyuxie29.github.io/PicoAudio.github.io/
- AudioLCM (diffusion; liu24arxiv): https://audiolcm.github.io/
- UniAudio 1.5 (transformer; yang24arxiv): https://github.com/yangdongchao/LLM-Codec
- Tango 2 (diffusion; majumder24mm): https://tango2-web.github.io/
- Baton (diffusion; liao24arxiv): https://baton2024.github.io/
- T-FOLEY (diffusion; chung24icassp): https://yoonjinxd.github.io/Event-guided_FSS_Demo.github.io/
- Audiobox (diffusion; vyas23arxiv): https://audiobox.metademolab.com/
- Amphion (zhang23arxiv): https://github.com/open-mmlab/Amphion
- VoiceLDM (diffusion; lee23arxiv): https://voiceldm.github.io/
- AudioLDM 2 (diffusion; liu23arxiv): https://audioldm.github.io/audioldm2/ (see the sampling sketch after this list)
- WavJourney (; liu23arxiv): https://audio-agi.github.io/WavJourney_demopage/
- CLIPSynth (diffusion; dong23cvprw): https://salu133445.github.io/clipsynth/
- CLIPSonic (diffusion; dong23waspaa): https://salu133445.github.io/clipsonic/
- SoundStorm (Transformer; borsos23arxiv): https://google-research.github.io/seanet/soundstorm/examples/
- AUDIT (diffusion; wang23arxiv): https://audit-demo.github.io/
- VALL-E (Transformer; wang23arxiv): https://www.microsoft.com/en-us/research/project/vall-e/ (for speech)
- multi-source-diffusion-models (diffusion; 23arxiv): https://gladia-research-group.github.io/multi-source-diffusion-models/
- Make-An-Audio (diffusion; huang23arxiv): https://text-to-audio.github.io/ (for general sounds)
- AudioLDM (diffusion; liu23arxiv): https://audioldm.github.io/ (for general sounds)
- AudioGen (Transformer; kreuk23iclr): https://felixkreuk.github.io/audiogen/ (for general sounds)
- AudioLM (Transformer; borsos23taslp): https://google-research.github.io/seanet/audiolm/examples/ (for general sounds)
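Most of the text-to-audio systems above expose a similar prompt-in, waveform-out interface. As one concrete example, AudioLDM 2 is available as a pipeline in Hugging Face diffusers; the sketch below assumes the cvssp/audioldm2 checkpoint and a CUDA device, with the step count and clip length chosen only for illustration.
```python
# Minimal text-to-audio sketch with the AudioLDM 2 pipeline from Hugging Face
# `diffusers` (assumes `pip install diffusers transformers scipy`; checkpoint
# name and sampling settings are illustrative).
import torch
import scipy.io.wavfile
from diffusers import AudioLDM2Pipeline

pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

audio = pipe(
    "dog barking in the rain",
    num_inference_steps=200,
    audio_length_in_s=10.0,
).audios[0]  # numpy array of mono samples

# AudioLDM 2 generates waveforms at a 16 kHz sample rate.
scipy.io.wavfile.write("audioldm2_sample.wav", rate=16000, data=audio)
```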
audio-domain music generation
- VampNet (transformer; garcia23ismir): https://hugo-does-things.notion.site/VampNet-Music-Generation-via-Masked-Acoustic-Token-Modeling-e37aabd0d5f1493aa42c5711d0764b33
- Fast Jukebox (Jukebox+knowledge distillation; pezzat-morales23mdpi): https://soundcloud.com/michel-pezzat-615988723
- DAG (diffusion; pascual23icassp): https://diffusionaudiosynthesis.github.io/
- musika! (GAN; pasini22ismir): https://huggingface.co/spaces/marcop/musika
- Jukebox (VQVAE+Transformer; dhariwal20arxiv): https://openai.com/blog/jukebox/
- UNAGAN (GAN; liu20arxiv): https://github.com/ciaua/unagan
- dadabots (sampleRNN; carr18mume): http://dadabots.com/music.php
given singing, generate accompaniments
- FastSAG (diffusion; chen24arxiv): https://fastsag.github.io/
- SingSong (VQVAE+Transformer; donahue23arxiv): https://storage.googleapis.com/sing-song/index.html
given drumless audio, generate drum accompaniments
- JukeDrummer (VQVAE+Transformer; wu22ismir): https://legoodmanner.github.io/jukedrummer-demo/
audio-domain singing synthesis
- Prompt-Singer (transformer; wang24naacl): https://prompt-singer.github.io/
- StyleSinger (diffusion; zhang24aaai): https://stylesinger.github.io/
- BiSinger (transformer; zhou23asru): https://bisinger-svs.github.io/
- HiddenSinger (diffusion; hwang23arxiv): https://jisang93.github.io/hiddensinger-demo/
- Make-A-Voice (transformer; huang23arxiv): https://make-a-voice.github.io/
- RMSSinger (diffusion; he23aclf): https://rmssinger.github.io/
- NaturalSpeech 2 (diffusion; shen23arxiv): https://speechresearch.github.io/naturalspeech2/
- NANSY++ (Transformer; choi23iclr): https://bald-lifeboat-9af.notion.site/Demo-Page-For-NANSY-67d92406f62b4630906282117c7f0c39
- UniSyn (; lei23aaai): https://leiyi420.github.io/UniSyn/
- VISinger 2 (zhang22arxiv): https://zhangyongmao.github.io/VISinger2/
- XiaoiceSing 2 (Transformer+GAN; wang22arxiv): https://wavelandspeech.github.io/xiaoice2/
- WeSinger 2 (Transformer+GAN; zhang22arxiv): https://zzw922cn.github.io/wesinger2/
- U-Singer (Transformer; kim22arxiv): https://u-singer.github.io/
- Singing-Tacotron (Transformer; wang22arxiv): https://hairuo55.github.io/SingingTacotron/
- KaraSinger (GRU/Transformer; liao22icassp): https://jerrygood0703.github.io/KaraSinger/
- VISinger (flow; zhang2): https://zhangyongmao.github.io/VISinger/
- MLP Singer (mixer blocks; tae21arxiv): https://github.com/neosapience/mlp-singer
- LiteSing (wavenet; zhuang21icassp): https://auzxb.github.io/LiteSing/
- DiffSinger (diffusion; liu22aaai) [no duration modeling]: https://diffsinger.github.io/
- HiFiSinger (Transformer; chen20arxiv): https://speechresearch.github.io/hifisinger/
- DeepSinger (Transformer; ren20kdd):