AutoSub

About

AutoSub is a CLI application to generate subtitle files (.srt, .vtt, and a .txt transcript) for any video file using either Mozilla DeepSpeech or Coqui STT. I use their open-source models to run inference on audio segments, and pyAudioAnalysis to split the initial audio on silent segments, producing multiple smaller files that are easier to run inference on.

⭐ Featured in DeepSpeech Examples by Mozilla

Installation

  • Clone the repo
    $ git clone https://github.com/abhirooptalasila/AutoSub
    $ cd AutoSub
    
  • [OPTIONAL] Create a virtual environment to install the required packages. By default, AutoSub will be installed globally. All further steps should be performed while in the AutoSub/ directory
    $ python3 -m pip install --user virtualenv
    $ virtualenv -p python3 sub
    $ source sub/bin/activate
    
  • Use the corresponding requirements file depending on whether you have a GPU or not. If you want to install for a GPU, replace requirements.txt with requirements-gpu.txt. Make sure you have the appropriate CUDA version
    $ pip install .
    
  • Install FFMPEG. If you're on Ubuntu, this should work fine
    $ sudo apt-get install ffmpeg
    $ ffmpeg -version               # I'm running 4.1.4
    
  • By default, if no model files are found in the root directory, the script will download the v0.9.3 models for DeepSpeech, or the TFLite model and huge-vocabulary scorer for Coqui. Use getmodels.sh to download the DeepSpeech model and scorer files, passing the version number as an argument. For Coqui, download from here
    $ ./getmodels.sh 0.9.3
    
  • For .tflite models with DeepSpeech, follow this

Docker

  • If you don't have the model files, get them
    $ ./getmodels.sh 0.9.3
    
  • For a CPU build
    $ docker build -t autosub .
    $ docker run --volume=`pwd`/input:/input --name autosub autosub --file /input/video.mp4
    $ docker cp autosub:/output/ .
    
  • For a GPU build that is reusable (saving time on instantiating the program)
    $ docker build --build-arg BASEIMAGE=nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04 --build-arg DEPSLIST=requirements-gpu.txt -t autosub-base . && \
    docker run --gpus all --name autosub-base autosub-base --dry-run || \
    docker commit --change 'CMD []' autosub-base autosub-instance
    
  • Finally
    $ docker run --volume=`pwd`/input:/input --name autosub autosub-instance --file /input/video.mp4
    $ docker cp autosub:/output/ .
    

How-to example

  • The model files should be in the repo root directory and will be loaded/downloaded automatically. In case you have multiple versions, use the --model and --scorer args while executing
  • By default, Coqui is used for inference. You can change this by using the --engine argument with value "ds" for DeepSpeech
  • For languages other than English, you'll need to manually download the model and scorer files. Check here for DeepSpeech and here for Coqui.
  • After following the installation instructions, you can run autosub/main.py as given below. The --file argument is the video file for which subtitles are to be generated
    $ python3 autosub/main.py --file ~/movie.mp4
    
  • After the script finishes, the subtitle files are saved in output/
  • The optional --split-duration argument allows customization of the maximum number of seconds any given subtitle is displayed for. The default is 5 seconds
    $ python3 autosub/main.py --file ~/movie.mp4 --split-duration 8
    
  • By default, AutoSub outputs SRT, VTT and TXT files. To only produce the file formats you want, use the --format argument
    $ python3 autosub/main.py --file ~/movie.mp4 --format srt txt
    
  • Open the video file and add this SRT file as a subtitle. You can just drag and drop in VLC.
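For reference, an SRT file is a plain-text sequence of numbered cues, each with a start --> end timestamp pair followed by the subtitle text. The timestamps and lines below are made up for illustration:

```text
1
00:00:01,000 --> 00:00:04,500
Welcome to the movie.

2
00:00:05,200 --> 00:00:08,000
Let's get started.
```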

How it works

Mozilla DeepSpeech is an open-source speech-to-text engine with support for fine-tuning on custom datasets, external language models, exporting memory-mapped models, and a lot more. You should definitely check it out for STT tasks. When you run the script, I use FFMPEG to extract the audio from the video and save it in audio/. By default DeepSpeech is configured to accept 16kHz audio samples for inference, so during extraction I have FFMPEG resample the audio at 16kHz.
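Concretely, the extraction step boils down to an FFMPEG invocation along these lines. This is a sketch, not AutoSub's actual code: `build_extract_cmd` is a hypothetical helper, and the exact flags the script passes may differ.

```python
# Sketch: build the FFMPEG command for the audio-extraction step.
# File names and the helper itself are illustrative.
def build_extract_cmd(video_path, wav_path):
    return [
        "ffmpeg", "-i", video_path,
        "-vn",           # drop the video stream
        "-ac", "1",      # mono audio
        "-ar", "16000",  # 16 kHz sample rate, the input DeepSpeech expects
        wav_path,
    ]

print(" ".join(build_extract_cmd("movie.mp4", "audio/movie.wav")))
# → ffmpeg -i movie.mp4 -vn -ac 1 -ar 16000 audio/movie.wav
```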

Then, I use pyAudioAnalysis for silence removal: it takes the large audio file extracted initially and splits it wherever silent regions are encountered, resulting in smaller audio segments that are much easier to process. I haven't used the whole library; instead I've integrated parts of it in autosub/featureExtraction.py and autosub/trainAudio.py. All these audio files are stored in audio/. Then I run DeepSpeech inference on each audio segment and write the inferred text to an SRT file. After all segments are processed, the final SRT file is stored in output/.
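The per-segment write-out step can be sketched as follows. This is a simplified illustration rather than AutoSub's actual code: `srt_timestamp` and `write_srt_entry` are hypothetical helpers, the segment start/end times would come from the silence splitter, and the subtitle text would come from the STT engine's inference on that segment.

```python
# Sketch of writing one SRT cue for an inferred audio segment.
def srt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp, HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def write_srt_entry(index, start, end, text):
    """Return one numbered SRT cue: index, timestamp range, then the text."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"

print(write_srt_entry(1, 3.5, 7.25, "Hello there."))
# → 1
#   00:00:03,500 --> 00:00:07,250
#   Hello there.
```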

When I tested the script on my laptop, it took about 40 minutes to generate the SRT file for a 70-minute video. My config is a dual-core i5 @ 2.5 GHz with 8 GB RAM. Ideally, the whole process shouldn't take more than 60% of the duration of the original video.

Motivation

In the age of OTT platforms, there are still some who prefer to download movies/videos from YouTube/Facebook or even torrents rather than stream. I am one of them and on one such occasion, I couldn't find the subtitle file for a particular movie I had downloaded. Then the idea for AutoSub struck me and since I had worked with DeepSpeech previously, I decided to use it.

Contributing

I would love to follow up on any suggestions/issues you find :)

References

  1. https://github.com/mozilla/DeepSpeech/
  2. https://github.com/tyiannak/pyAudioAnalysis
  3. https://deepspeech.readthedocs.io/