
prom2teams

An integration tool for forwarding Prometheus alerts to Microsoft Teams

prom2teams is an open-source Python tool that forwards alert notifications from the Prometheus Alertmanager to Microsoft Teams. It supports alert grouping, label filtering and a retry mechanism, and can be deployed via Docker or a Helm chart. The tool integrates seamlessly with existing monitoring systems and helps operations teams handle alerts more efficiently.


prom2teams: Prometheus Alertmanager/Microsoft Teams integration

(Screenshot: example alert as rendered in Microsoft Teams)

prom2teams is a service built with Python that receives alert notifications from a previously configured Prometheus Alertmanager instance and forwards them to Microsoft Teams using defined connectors.

Its key features include alert grouping, label/annotation exclusion and a retry policy for Teams notifications.

Getting Started

Prerequisites

The application has been tested with Prometheus 2.2.1, Python 3.8.0 and pip 9.0.1.

Newer versions of Prometheus/Python/pip should work but could also present issues.

Installing

prom2teams is available on PyPI, so it can be installed using pip3:

$ pip3 install prom2teams

Note: Installation via pip works since v1.1.1

Usage

Important: A config file defining at least one Microsoft Teams connector must be provided. Check the options below to see how to supply it.

# To start the server (enable metrics, config file path, group alerts by, log file path, log level and Jinja2 template path are optional arguments):
$ prom2teams [--enablemetrics] [--configpath <config file path>] [--groupalertsby ("name"|"description"|"instance"|"severity"|"summary")] [--logfilepath <log file path>] [--loglevel (DEBUG|INFO|WARNING|ERROR|CRITICAL)] [--templatepath <Jinja2 template file path>]

# To show the help message:
$ prom2teams --help
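
For example, a typical invocation could look like the following (the config path shown is illustrative, not a default):

$ prom2teams --configpath /etc/prom2teams/config.ini --groupalertsby name --loglevel INFO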

Other options to start the service are:

$ export APP_CONFIG_FILE=<config file path>
$ prom2teams

Note: Grouping alerts works since v2.2.1

Docker image

With every new prom2teams release, a new Docker image is built on our Docker Hub. We strongly recommend using the images with a version tag, although it is possible to use them without one.

There are a few things you need to bear in mind when creating a Prom2teams container:

  • The connector URL must be passed as the environment variable PROM2TEAMS_CONNECTOR
  • In case you want to group alerts, you need to pass the field as the environment variable PROM2TEAMS_GROUP_ALERTS_BY
  • You need to map the container's Prom2teams port to one on your host.

So a sample Docker run command would be:

$ docker run -it -d -e PROM2TEAMS_GROUP_ALERTS_BY=FIELD_YOU_WANT_TO_GROUP_BY -e PROM2TEAMS_CONNECTOR="CONNECTOR_URL" -p 8089:8089 idealista/prom2teams:VERSION

Provide custom config file

If you prefer to use your own config file, you just need to provide it as a Docker volume to the container and map it to /opt/prom2teams/config.ini. Sample:

$ docker run -it -d -v pathToTheLocalConfigFile:/opt/prom2teams/config.ini -p 8089:8089 idealista/prom2teams:VERSION

Helm chart

Installing the Chart

To install the chart with the release name my-release run:

$ helm install --name my-release /location/of/prom2teams_ROOT/helm

Note: the --name flag is Helm 2 syntax; with Helm 3, run helm install my-release /location/of/prom2teams_ROOT/helm instead.

After a few seconds, Prom2Teams should be running.

Tip: List all releases using helm list. A release is a name used to track a specific deployment.

Uninstalling the Chart

To uninstall/delete the my-release deployment:

Helm 2
$ helm delete my-release

Tip: Use helm delete --purge my-release to completely remove the release from Helm internal storage

The command removes all the Kubernetes components associated with the chart and deletes the release.

Helm 3
$ helm uninstall my-release

The command removes all the Kubernetes components associated with the chart and deletes the release.

Configuration

The following table lists the configurable parameters of the Prom2teams chart and their default values.

| Parameter | Description | Default |
| --- | --- | --- |
| image.repository | The image repository to pull from | idealista/prom2teams |
| image.tag | The image tag to pull | `<empty>` |
| image.pullPolicy | The image pull policy | IfNotPresent |
| resources.requests.cpu | CPU requested for being run in a node | 100m |
| resources.requests.memory | Memory requested for being run in a node | 128Mi |
| resources.limits.cpu | CPU limit | 200m |
| resources.limits.memory | Memory limit | 200Mi |
| service.type | Service type (NodePort/ClusterIP) | ClusterIP |
| service.port | Service port | 8089 |
| prom2teams.host | IP to bind to | 0.0.0.0 |
| prom2teams.port | Port to bind to | 8089 |
| prom2teams.connector | Connector URL | `<empty>` |
| prom2teams.connectors | A map where the keys are the connector names and the values are the connector webhook URLs | {} |
| prom2teams.group_alerts_by | Field to group alerts by | `<empty>` |
| prom2teams.loglevel | Log level | INFO |
| prom2teams.templatepath | Custom template path (files/teams.j2) | /opt/prom2teams/helmconfig/teams.j2 |
| prom2teams.config | Config (specific to Helm) | /opt/prom2teams/helmconfig/config.ini |
| prom2teams.extraEnv | Dictionary of arbitrary additional environment variables for the deployment (e.g. HTTP_PROXY) | `<empty>` |
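
As an illustration, chart parameters from the table above can be overridden at install time with --set; the webhook URL below is just a placeholder for your own Teams connector:

$ helm install --name my-release /location/of/prom2teams_ROOT/helm \
    --set prom2teams.connector="https://outlook.office.com/webhook/YOUR-WEBHOOK-ID" \
    --set prom2teams.group_alerts_by=name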

Production

For production environments you should prefer using a WSGI server; the uWSGI dependency is installed for easy usage. Some considerations must be taken into account to use it:

The binary prom2teams_uwsgi launches the app using the uWSGI server. Due to some incompatibilities with wheel, you must install prom2teams using sudo pip install --no-binary :all: prom2teams (https://github.com/pypa/wheel/issues/92).

$ prom2teams_uwsgi <path to uwsgi ini config>

And the uWSGI config file would look like:

[uwsgi]
master = true
processes = 5
#socket = 0.0.0.0:8001
#protocol = http
socket = /tmp/prom2teams.sock
chmod-socket = 777
vacuum = true
env = APP_ENVIRONMENT=pro
env = APP_CONFIG_FILE=/etc/default/prom2teams.ini

Consider not providing the chdir property nor the module property.

You can also set the module file by creating a symbolic link: sudo mkdir -p /usr/local/etc/prom2teams/ && sudo ln -sf /usr/local/lib/python3.7/dist-packages/usr/local/etc/prom2teams/wsgi.py /usr/local/etc/prom2teams/wsgi.py (check your dist-packages folder).

Another approach is to provide the module file yourself (see the module example) and the uwsgi invocation (see the uwsgi example).

Note: the default log level is DEBUG and messages are redirected to stdout. To enable file logging, set the environment variable APP_ENVIRONMENT=(pro|pre).
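
As a sketch of how the uWSGI launcher might be supervised in production, a minimal systemd unit could look like the one below. The unit name, install path and config path are assumptions for illustration, not files shipped by prom2teams:

[Unit]
Description=prom2teams uWSGI service
After=network.target

[Service]
# Illustrative paths: point ExecStart at your prom2teams_uwsgi binary and uWSGI ini file
ExecStart=/usr/local/bin/prom2teams_uwsgi /etc/default/prom2teams-uwsgi.ini
Restart=on-failure

[Install]
WantedBy=multi-user.target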

Config file

The config file is an INI file and should have the structure described below:

[Microsoft Teams]
# At least one connector is required here
Connector: <webhook url>
AnotherConnector: <webhook url>   
...

[HTTP Server]
Host: <host ip> # default: localhost
Port: <host port> # default: 8089

[Log]
Level: <loglevel (DEBUG|INFO|WARNING|ERROR|CRITICAL)> # default: DEBUG
Path: <log file path>  # default: /var/log/prom2teams/prom2teams.log

[Template]
Path: <Jinja2 template path> # default: app resources default template (./prom2teams/resources/templates/teams.j2)

[Group Alerts]
Field: <Field to group alerts by> # alerts won't be grouped by default

[Labels]
Excluded: <Comma separated list of labels to ignore>

[Annotations]
Excluded: <Comma separated list of annotations to ignore>

[Teams Client]
RequestTimeout: <Configures the request timeout> # defaults to 30 secs
RetryEnable: <Enables teams client retry policy> # defaults to false
RetryWaitTime: <Wait time between retries> # default: 60 secs
MaxPayload: <Teams client payload limit in bytes> # default: 24KB
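
For reference, a minimal concrete config could look like the following; the webhook URL is a placeholder for your own Teams connector URL:

[Microsoft Teams]
Connector: https://outlook.office.com/webhook/YOUR-WEBHOOK-ID

[HTTP Server]
Host: 0.0.0.0
Port: 8089

[Group Alerts]
Field: name

[Teams Client]
RetryEnable: true
RetryWaitTime: 60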

Note: Grouping alerts works since v2.2.0

Configuring Prometheus

The webhook receiver in Prometheus Alertmanager allows configuring a prom2teams server.

The url is formed by the host and port defined in the previous step.

Note: In order to keep compatibility with previous versions, v2.0 keeps serving the default connector ("Connector") on the endpoint 0.0.0.0:8089. This will be removed in future versions.

# The prom2teams endpoint to send HTTP POST requests to.
url: 0.0.0.0:8089/v2/<Connector1>
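
Putting this together, a minimal Alertmanager receiver configuration could look like the sketch below, assuming prom2teams listens on localhost:8089 and a connector named Connector1 is defined in its config file:

receivers:
  - name: prom2teams
    webhook_configs:
      - url: 'http://localhost:8089/v2/Connector1'

route:
  receiver: prom2teams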

Prom2teams Prometheus metrics

Prom2teams uses Flask and, to have the service monitored, we use @rycus86's Prometheus Flask Exporter. This enables an endpoint at /metrics where you can find useful metrics to monitor, such as the number of responses with a certain status. To enable this endpoint, either:

  • Use the --enablemetrics or -m flag when launching prom2teams.
  • Set the environment variable PROM2TEAMS_PROMETHEUS_METRICS=true.
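
Once enabled, the endpoint can be checked manually and scraped by Prometheus; the host and port below assume the defaults:

$ curl http://localhost:8089/metrics

scrape_configs:
  - job_name: prom2teams
    static_configs:
      - targets: ['localhost:8089']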

Templating

prom2teams provides a default template built with Jinja2 to render messages in Microsoft Teams. This template can be overridden using the --templatepath argument when starting the application.

Some fields are considered mandatory when received from Alertmanager. If such a field is not included, a default value of 'unknown' is assigned.

All non-mandatory labels that are not in the excluded list are injected into the extra_labels key; all non-mandatory annotations that are not in the excluded list are injected into the extra_annotations key.

Alertmanager fingerprints are available in the fingerprint key. Fingerprints are supported by Alertmanager 0.19.0 or greater.
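
As a purely hypothetical illustration of how these keys might be consumed in a custom template, the Jinja2 fragment below iterates over extra_labels as if it were a mapping; the variable names and structure are assumptions, so check the bundled teams.j2 for the variables actually exposed:

{# Hypothetical fragment: render each extra label as a name/value pair #}
{% for key, value in extra_labels.items() %}
  { "name": "{{ key }}", "value": "{{ value }}" }{{ "," if not loop.last }}
{% endfor %}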

Documentation

Swagger UI

Accessing <Host>:<Port> (e.g. localhost:8089) in a web browser shows the API v1 documentation.


Accessing <Host>:<Port>/v2 (e.g. localhost:8089/v2) in a web browser shows the API v2 documentation.


Testing

To run the test suite you should type the following:

# After cloning prom2teams :)
$ pip install -r requirements.txt
$ python3 -m unittest discover tests
$ cd tests/e2e
$ ./test.sh

Built With

  • Python 3.8.0
  • pip 9.0.1

Versioning

For the versions available, see the tags on this repository.

Additionally, you can see what changed in each version in the CHANGELOG.md file.

Authors

See also the list of contributors who participated in this project.

License

Apache 2.0 License

This project is licensed under the Apache 2.0 license - see the LICENSE file for details.

Contributing

Please read CONTRIBUTING.md for details on our code of conduct and the process for submitting pull requests to us.
