SpeechRecognition
=================

.. image:: https://img.shields.io/pypi/v/SpeechRecognition.svg
   :target: https://pypi.python.org/pypi/SpeechRecognition/
   :alt: Latest Version

.. image:: https://img.shields.io/pypi/status/SpeechRecognition.svg
   :target: https://pypi.python.org/pypi/SpeechRecognition/
   :alt: Development Status

.. image:: https://img.shields.io/pypi/pyversions/SpeechRecognition.svg
   :target: https://pypi.python.org/pypi/SpeechRecognition/
   :alt: Supported Python Versions

.. image:: https://img.shields.io/pypi/l/SpeechRecognition.svg
   :target: https://pypi.python.org/pypi/SpeechRecognition/
   :alt: License

.. image:: https://api.travis-ci.org/Uberi/speech_recognition.svg?branch=master
   :target: https://travis-ci.org/Uberi/speech_recognition
   :alt: Continuous Integration Test Results

Library for performing speech recognition, with support for several engines and APIs, online and offline.

**UPDATE 2022-02-09**: Hey everyone! This project started as a tech demo, but these days it needs more time than I have to keep up with all the PRs and issues. Therefore, I'd like to put out an open invite for collaborators - just reach out to me@anthonyz.ca if you're interested!

Speech recognition engine/API support:

  • `CMU Sphinx <http://cmusphinx.sourceforge.net/wiki/>`__ (works offline)
  • Google Speech Recognition
  • `Google Cloud Speech API <https://cloud.google.com/speech/>`__
  • `Wit.ai <https://wit.ai/>`__
  • `Microsoft Azure Speech <https://azure.microsoft.com/en-us/services/cognitive-services/speech/>`__
  • `Microsoft Bing Voice Recognition (Deprecated) <https://www.microsoft.com/cognitive-services/en-us/speech-api>`__
  • `Houndify API <https://houndify.com/>`__
  • `IBM Speech to Text <http://www.ibm.com/smarterplanet/us/en/ibmwatson/developercloud/speech-to-text.html>`__
  • `Snowboy Hotword Detection <https://snowboy.kitt.ai/>`__ (works offline)
  • `Tensorflow <https://www.tensorflow.org/>`__
  • `Vosk API <https://github.com/alphacep/vosk-api/>`__ (works offline)
  • `OpenAI whisper <https://github.com/openai/whisper>`__ (works offline)
  • `Whisper API <https://platform.openai.com/docs/guides/speech-to-text>`__

Quickstart: ``pip install SpeechRecognition``. See the "Installing" section for more details.

To quickly try it out, run ``python -m speech_recognition`` after installing.
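
A minimal microphone-to-text session with the default Google Web Speech engine looks like this (a condensed sketch of the bundled ``examples/microphone_recognition.py``; it assumes PyAudio is installed and a microphone is available):

.. code:: python

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.Microphone() as source:  # requires PyAudio
        print("Say something!")
        audio = r.listen(source)

    try:
        print("You said: " + r.recognize_google(audio))
    except sr.UnknownValueError:
        print("Could not understand the audio")
    except sr.RequestError as e:
        print("Recognition request failed; {0}".format(e))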

Project links:

  • `PyPI <https://pypi.python.org/pypi/SpeechRecognition/>`__
  • `Source code <https://github.com/Uberi/speech_recognition>`__
  • `Issue tracker <https://github.com/Uberi/speech_recognition/issues>`__

Library Reference
-----------------

The `library reference <https://github.com/Uberi/speech_recognition/blob/master/reference/library-reference.rst>`__ documents every publicly accessible object in the library. This document is also included under ``reference/library-reference.rst``.

See `Notes on using PocketSphinx <https://github.com/Uberi/speech_recognition/blob/master/reference/pocketsphinx.rst>`__ for information about installing languages, compiling PocketSphinx, and building language packs from online resources. This document is also included under ``reference/pocketsphinx.rst``.

To use the Vosk recognizer, you also have to install Vosk models. The models available for download are listed `here <https://alphacephei.com/vosk/models>`__; place them in the ``models`` folder of your project, like ``your-project-folder/models/your-vosk-model``.

Examples
--------

See the ``examples/`` `directory <https://github.com/Uberi/speech_recognition/tree/master/examples>`__ in the repository root for usage examples:

  • `Recognize speech input from the microphone <https://github.com/Uberi/speech_recognition/blob/master/examples/microphone_recognition.py>`__
  • `Transcribe an audio file <https://github.com/Uberi/speech_recognition/blob/master/examples/audio_transcribe.py>`__
  • `Save audio data to an audio file <https://github.com/Uberi/speech_recognition/blob/master/examples/write_audio.py>`__
  • `Show extended recognition results <https://github.com/Uberi/speech_recognition/blob/master/examples/extended_results.py>`__
  • `Calibrate the recognizer energy threshold for ambient noise levels <https://github.com/Uberi/speech_recognition/blob/master/examples/calibrate_energy_threshold.py>`__ (see ``recognizer_instance.energy_threshold`` for details)
  • `Listening to a microphone in the background <https://github.com/Uberi/speech_recognition/blob/master/examples/background_listening.py>`__ (see the sketch after this list)
  • `Various other useful recognizer features <https://github.com/Uberi/speech_recognition/blob/master/examples/special_recognizer_features.py>`__
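
The background-listening pattern is worth sketching here, since it is the usual way to keep the main thread free: ``listen_in_background`` invokes the callback on a separate thread and returns a function that stops the listener (names below follow the bundled example):

.. code:: python

    import time

    import speech_recognition as sr

    def callback(recognizer, audio):  # called from the background thread for each phrase
        try:
            print("Heard: " + recognizer.recognize_google(audio))
        except sr.UnknownValueError:
            print("Could not understand the audio")

    r = sr.Recognizer()
    m = sr.Microphone()
    with m as source:
        r.adjust_for_ambient_noise(source)  # calibrate the energy threshold once

    stop_listening = r.listen_in_background(m, callback)
    time.sleep(10)  # the main thread is free to do other work while listening
    stop_listening(wait_for_stop=False)  # stop the background listener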

Installing
----------

First, make sure you have all the requirements listed in the "Requirements" section.

The easiest way to install this is using ``pip install SpeechRecognition``.

Otherwise, download the source distribution `from PyPI <https://pypi.python.org/pypi/SpeechRecognition/>`__, and extract the archive.

In the folder, run ``python setup.py install``.

Requirements
------------

To use all of the functionality of the library, you should have:

  • Python 3.8+ (required)
  • PyAudio 0.2.11+ (required only if you need to use microphone input, ``Microphone``)
  • PocketSphinx (required only if you need to use the Sphinx recognizer, ``recognizer_instance.recognize_sphinx``)
  • Google API Client Library for Python (required only if you need to use the Google Cloud Speech API, ``recognizer_instance.recognize_google_cloud``)
  • FLAC encoder (required only if the system is not x86-based Windows/Linux/OS X)
  • Vosk (required only if you need to use Vosk API speech recognition, ``recognizer_instance.recognize_vosk``)
  • Whisper (required only if you need to use Whisper, ``recognizer_instance.recognize_whisper``)
  • openai (required only if you need to use Whisper API speech recognition, ``recognizer_instance.recognize_whisper_api``)

The following requirements are optional, but can improve or extend functionality in some situations:

  • If using CMU Sphinx, you may want to `install additional language packs <https://github.com/Uberi/speech_recognition/blob/master/reference/pocketsphinx.rst#installing-other-languages>`__ to support languages like International French or Mandarin Chinese.

The following sections go over the details of each requirement.

Python
~~~~~~


The first software requirement is `Python 3.8+ <https://www.python.org/downloads/>`__. This is required to use the library.

PyAudio (for microphone users)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

`PyAudio <http://people.csail.mit.edu/hubert/pyaudio/#downloads>`__ is **required if and only if you want to use microphone input** (``Microphone``). PyAudio version 0.2.11+ is required, as earlier versions have known memory management bugs when recording from microphones in certain situations.

If not installed, everything in the library will still work, except attempting to instantiate a ``Microphone`` object will raise an ``AttributeError``.
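
A defensive check for this might look like the following sketch (the exact error message text may vary between versions):

.. code:: python

    import speech_recognition as sr

    r = sr.Recognizer()  # works even without PyAudio
    try:
        mic = sr.Microphone()
    except AttributeError:
        print("PyAudio is not installed; microphone input is unavailable")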

The installation instructions on the PyAudio website are quite good - for convenience, they are summarized below:

  • On Windows, install PyAudio using `Pip <https://pip.readthedocs.org/>`__: execute ``pip install pyaudio`` in a terminal.
  • On Debian-derived Linux distributions (like Ubuntu and Mint), install PyAudio using `APT <https://wiki.debian.org/Apt>`__: execute ``sudo apt-get install python-pyaudio python3-pyaudio`` in a terminal.
    • If the version in the repositories is too old, install the latest release using Pip: execute ``sudo apt-get install portaudio19-dev python-all-dev python3-all-dev && sudo pip install pyaudio`` (replace ``pip`` with ``pip3`` if using Python 3).
  • On OS X, install PortAudio using `Homebrew <http://brew.sh/>`__: ``brew install portaudio``. Then, install PyAudio using `Pip <https://pip.readthedocs.org/>`__: ``pip install pyaudio``.
  • On other POSIX-based systems, install the ``portaudio19-dev`` and ``python-all-dev`` (or ``python3-all-dev`` if using Python 3) packages (or their closest equivalents) using a package manager of your choice, and then install PyAudio using `Pip <https://pip.readthedocs.org/>`__: ``pip install pyaudio`` (replace ``pip`` with ``pip3`` if using Python 3).

PyAudio `wheel packages <https://pypi.python.org/pypi/wheel>`__ for common 64-bit Python versions on Windows and Linux are included for convenience, under the ``third-party/`` `directory <https://github.com/Uberi/speech_recognition/tree/master/third-party>`__ in the repository root. To install, simply run ``pip install wheel`` followed by ``pip install ./third-party/WHEEL_FILENAME`` (replace ``pip`` with ``pip3`` if using Python 3) in the `repository root directory <https://github.com/Uberi/speech_recognition>`__.

PocketSphinx-Python (for Sphinx users)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


`PocketSphinx-Python <https://github.com/bambocher/pocketsphinx-python>`__ is **required if and only if you want to use the Sphinx recognizer** (``recognizer_instance.recognize_sphinx``).

PocketSphinx-Python `wheel packages <https://pypi.python.org/pypi/wheel>`__ for 64-bit Python 3.4 and 3.5 on Windows are included for convenience, under the ``third-party/`` `directory <https://github.com/Uberi/speech_recognition/tree/master/third-party>`__. To install, simply run ``pip install wheel`` followed by ``pip install ./third-party/WHEEL_FILENAME`` (replace ``pip`` with ``pip3`` if using Python 3) in the SpeechRecognition folder.

On Linux and other POSIX systems (such as OS X), follow the instructions under "Building PocketSphinx-Python from source" in `Notes on using PocketSphinx <https://github.com/Uberi/speech_recognition/blob/master/reference/pocketsphinx.rst>`__ for installation instructions.

Note that the versions available in most package repositories are outdated and will not work with the bundled language data. Using the bundled wheel packages or building from source is recommended.

See `Notes on using PocketSphinx <https://github.com/Uberi/speech_recognition/blob/master/reference/pocketsphinx.rst>`__ for information about installing languages, compiling PocketSphinx, and building language packs from online resources. This document is also included under ``reference/pocketsphinx.rst``.

Vosk (for Vosk users)
~~~~~~~~~~~~~~~~~~~~~
Vosk API is **required if and only if you want to use Vosk recognizer** (``recognizer_instance.recognize_vosk``).

You can install it with ``python3 -m pip install vosk``.

You also have to install Vosk Models:

The models available for download are listed `here <https://alphacephei.com/vosk/models>`__. Place them in the ``models`` folder of your project, like ``your-project-folder/models/your-vosk-model``.
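
A minimal usage sketch follows, assuming a model has been downloaded and unpacked where the installed version of the library expects it (``speech.wav`` is a placeholder path; in the version we checked, ``recognize_vosk`` returns its result as a JSON string):

.. code:: python

    import json

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("speech.wav") as source:  # placeholder audio file
        audio = r.record(source)

    result = r.recognize_vosk(audio)  # returns the Vosk result as a JSON string
    print(json.loads(result).get("text", ""))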

Google Cloud Speech Library for Python (for Google Cloud Speech API users)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The `Google Cloud Speech library for Python <https://cloud.google.com/speech-to-text/docs/quickstart>`__ is **required if and only if you want to use the Google Cloud Speech API** (``recognizer_instance.recognize_google_cloud``).

If not installed, everything in the library will still work, except calling ``recognizer_instance.recognize_google_cloud`` will raise a ``RequestError``.

According to the `official installation instructions <https://cloud.google.com/speech-to-text/docs/quickstart>`__, the recommended way to install this is using `Pip <https://pip.readthedocs.org/>`__: execute ``pip install google-cloud-speech`` (replace ``pip`` with ``pip3`` if using Python 3).
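
Once installed, usage follows the same pattern as the other recognizers; here is a sketch assuming a service account key file (``service-account-key.json`` and ``speech.wav`` are placeholder paths):

.. code:: python

    import speech_recognition as sr

    # placeholder path to a Google Cloud service account JSON key file
    with open("service-account-key.json") as f:
        credentials = f.read()

    r = sr.Recognizer()
    with sr.AudioFile("speech.wav") as source:  # placeholder audio file
        audio = r.record(source)

    print(r.recognize_google_cloud(audio, credentials_json=credentials))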

FLAC (for some systems)
~~~~~~~~~~~~~~~~~~~~~~~


A `FLAC encoder <https://xiph.org/flac/>`__ is required to encode the audio data to send to the API. If using Windows (x86 or x86-64), OS X (Intel Macs only, OS X 10.6 or higher), or Linux (x86 or x86-64), this is **already bundled with this library - you do not need to install anything**.

Otherwise, ensure that you have the ``flac`` command line tool, which is often available through the system package manager. For example, this would usually be ``sudo apt-get install flac`` on Debian-derivatives, or ``brew install flac`` on OS X with Homebrew.

Whisper (for Whisper users)
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Whisper is **required if and only if you want to use whisper** (``recognizer_instance.recognize_whisper``).

You can install it with ``python3 -m pip install SpeechRecognition[whisper-local]``.
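
A minimal offline-transcription sketch (``speech.wav`` is a placeholder path; ``model="base"`` selects one of the standard Whisper model sizes, downloaded on first use):

.. code:: python

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("speech.wav") as source:  # placeholder audio file
        audio = r.record(source)

    # runs entirely locally; larger models are slower but more accurate
    print(r.recognize_whisper(audio, model="base", language="english"))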

Whisper API (for Whisper API users)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


The library `openai <https://pypi.org/project/openai/>`__ is **required if and only if you want to use Whisper API** (``recognizer_instance.recognize_whisper_api``).

If not installed, everything in the library will still work, except calling ``recognizer_instance.recognize_whisper_api`` will raise a ``RequestError``.

You can install it with ``python3 -m pip install SpeechRecognition[whisper-api]``.
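
A minimal sketch (``speech.wav`` and the API key are placeholders; the key can also be supplied via the ``OPENAI_API_KEY`` environment variable):

.. code:: python

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.AudioFile("speech.wav") as source:  # placeholder audio file
        audio = r.record(source)

    # sends the audio to OpenAI's hosted Whisper model
    print(r.recognize_whisper_api(audio, api_key="sk-..."))  # placeholder key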

Troubleshooting
---------------

The recognizer tries to recognize speech even when I'm not speaking, or after I'm done speaking.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Try increasing the ``recognizer_instance.energy_threshold`` property. This is basically how sensitive the recognizer is to when recognition should start. Higher values mean that it will be less sensitive, which is useful if you are in a loud room.

This value depends entirely on your microphone or audio data. There is no one-size-fits-all value, but good values typically range from 50 to 4000.

Also, check on your microphone volume settings. If it is too sensitive, the microphone may be picking up a lot of ambient noise. If it is too insensitive, the microphone may be rejecting speech as just noise.
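
For example, to make the recognizer less sensitive in a noisy room (300 is the library's default; 4000 here is just an illustrative value within the typical range above):

.. code:: python

    import speech_recognition as sr

    r = sr.Recognizer()
    r.energy_threshold = 4000  # only audio louder than this counts as speech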

The recognizer can't recognize speech right after it starts listening for the first time.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


The ``recognizer_instance.energy_threshold`` property is probably set to a value that is too high to start off with, and then being adjusted lower automatically by dynamic energy threshold adjustment. Before it is at a good level, the energy threshold is so high that speech is just considered ambient noise.

The solution is to decrease this threshold, or call ``recognizer_instance.adjust_for_ambient_noise`` beforehand, which will set the threshold to a good value automatically.
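
A sketch of the recommended calibration pattern:

.. code:: python

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.Microphone() as source:
        r.adjust_for_ambient_noise(source)  # samples ambient noise for about 1 second
        audio = r.listen(source)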

The recognizer doesn't understand my particular language/dialect.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Try setting the recognition language to your language/dialect. To do this, see the documentation for ``recognizer_instance.recognize_sphinx``, ``recognizer_instance.recognize_google``, ``recognizer_instance.recognize_wit``, ``recognizer_instance.recognize_bing``, ``recognizer_instance.recognize_api``, ``recognizer_instance.recognize_houndify``, and ``recognizer_instance.recognize_ibm``.

For example, if your language/dialect is British English, it is better to use ``"en-GB"`` as the language rather than ``"en-US"``.
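
With the Google Web Speech recognizer, that looks like this sketch (the ``language`` parameter takes a language tag such as ``"en-GB"``):

.. code:: python

    import speech_recognition as sr

    r = sr.Recognizer()
    with sr.Microphone() as source:
        audio = r.listen(source)

    # British English instead of the en-US default
    print(r.recognize_google(audio, language="en-GB"))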

The recognizer hangs on ``recognizer_instance.listen``; specifically, when it's calling ``Microphone.MicrophoneStream.read``.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This usually happens when you're using a Raspberry Pi board, which doesn't have audio input capabilities by itself. This causes the default microphone used by PyAudio to simply block when we try to read it. If you happen to be using a Raspberry Pi, you'll need a USB sound card (or USB microphone).

Once you do this, change all instances of ``Microphone()`` to ``Microphone(device_index=MICROPHONE_INDEX)``, where ``MICROPHONE_INDEX`` is the hardware-specific index of the microphone.

To figure out what the value of ``MICROPHONE_INDEX`` should be, run the following code:

.. code:: python

    import speech_recognition as sr
    for index, name in enumerate(sr.Microphone.list_microphone_names()):
        print("Microphone with name \"{1}\" found for `Microphone(device_index={0})`".format(index, name))

This will print out something like the following:

::

Microphone with name "HDA Intel HDMI: 0 (hw:0,3)" found for `Microphone(device_index=0)`
Microphone with name "HDA Intel HDMI: 1 (hw:0,7)" found for `Microphone(device_index=1)`
Microphone with name "HDA Intel HDMI: 2 (hw:0,8)" found for `Microphone(device_index=2)`
Microphone with name "Blue Snowball: USB Audio (hw:1,0)" found for `Microphone(device_index=3)`
Microphone with name "hdmi" found for `Microphone(device_index=4)`
Microphone with name "pulse" found for `Microphone(device_index=5)`
Microphone with name "default" found for `Microphone(device_index=6)`

Now, to use the Snowball microphone, you would change ``Microphone()`` to ``Microphone(device_index=3)``.

Calling ``Microphone()`` gives the error ``IOError: No Default Input Device Available``.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~


As the error says, the program doesn't know which microphone to use.

To proceed, either use ``Microphone(device_index=MICROPHONE_INDEX, ...)`` instead of ``Microphone(...)``, or set a default microphone in your OS. You can obtain possible values of ``MICROPHONE_INDEX`` using the code in the troubleshooting entry right above this one.