T-Pot - The All In One Multi Honeypot Platform
T-Pot is the all-in-one, optionally distributed, multiarch (amd64, arm64) honeypot platform, supporting 20+ honeypots and countless visualization options using the Elastic Stack, animated live attack maps, and lots of security tools to further improve the deception experience.
TL;DR
- Meet the system requirements. The T-Pot installation needs at least 8-16 GB RAM, 128 GB free disk space as well as a working (outgoing non-filtered) internet connection.
- Download or use a running, supported distribution.
- Install the ISO with as minimal packages / services as possible (`ssh` required).
- Install `curl` if not already installed: `$ sudo [apt, dnf, zypper] install curl`
- Run the installer as non-root from `$HOME`:
  `env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/install.sh)"`
- Follow the instructions, read the messages, check for possible port conflicts and reboot.
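The steps above can be sketched as a small pre-flight check. This is illustrative only, not part of T-Pot itself; the thresholds follow the TL;DR (8 GB RAM, 128 GB free disk), and the script only prints the install command instead of running it:

```shell
# Illustrative pre-flight check before running the T-Pot installer.
# Not part of T-Pot; thresholds follow the TL;DR above.

# The installer must be run as a non-root user.
if [ "$(id -u)" -eq 0 ]; then
  echo "WARN: run the installer as a non-root user, not root"
fi

# curl is required to fetch install.sh.
if command -v curl >/dev/null 2>&1; then
  echo "OK: curl is installed"
else
  echo "WARN: install curl first, e.g. sudo apt install curl"
fi

# Check total RAM (>= 8 GB) and free disk space (>= 128 GB).
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
[ "$mem_kb" -ge $((8 * 1024 * 1024)) ] \
  && echo "OK: RAM >= 8 GB" || echo "WARN: less than 8 GB RAM"

free_kb=$(df -k --output=avail "${HOME:-.}" | tail -1)
[ "$free_kb" -ge $((128 * 1024 * 1024)) ] \
  && echo "OK: >= 128 GB free" || echo "WARN: less than 128 GB free disk space"

# Print (rather than run) the actual install command:
echo 'env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/install.sh)"'
```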
- T-Pot - The All In One Multi Honeypot Platform
- TL;DR
- Disclaimer
- Technical Concept
- System Requirements
- System Placement
- Installation
- First Start
- Remote Access and Tools
- Configuration
- Maintenance
- Troubleshooting
- Contact
- Licenses
- Credits
- Testimonials
Disclaimer
- You install and run T-Pot within your responsibility. Choose your deployment wisely as a system compromise can never be ruled out.
- For fast help, research the Issues and Discussions.
- The software is designed and offered with best effort in mind. As a community and open source project it uses lots of other open source software and may contain bugs and issues. Report responsibly.
- Honeypots - by design - should not host any sensitive data. Make sure you don't add any.
- By default, your data is submitted to Sicherheitstacho. You can disable this in the config (`~/tpotce/docker-compose.yml`) by removing the `ewsposter` section. But in this case sharing really is caring!
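Removing the `ewsposter` section means deleting that service block from the compose file. A minimal sketch of the idea, demonstrated on an inline sample rather than the real file (service names and images here are placeholders); on a real system you would edit `~/tpotce/docker-compose.yml` and restart T-Pot afterwards:

```shell
# Sketch: remove the ewsposter service block from a compose file.
# Demonstrated on an inline sample with placeholder services.
sample='services:
  ewsposter:
    image: example/ewsposter
    restart: always
  cowrie:
    image: example/cowrie'

# awk skips everything from the ewsposter key to the next service key.
result=$(printf '%s\n' "$sample" | awk '
  /^  ewsposter:/ {skip=1; next}
  skip && /^  [^ ]/ {skip=0}
  !skip {print}')
printf '%s\n' "$result"
```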
Technical Concept
T-Pot's main components have been moved into the `tpotinit` Docker image, allowing T-Pot to support multiple Linux distributions, and even macOS and Windows (though both are limited to the feature set of Docker Desktop). T-Pot uses Docker and Docker Compose to run as many honeypots and tools as possible simultaneously, utilizing the host's hardware to its maximum.
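Each honeypot and tool runs as its own Compose service. A schematic (not verbatim) sketch of what a service entry in `~/tpotce/docker-compose.yml` looks like; the image name, tag, network name and port mappings here are illustrative assumptions, not the exact upstream configuration:

```yaml
# Schematic service entry; values are illustrative assumptions.
services:
  cowrie:
    image: dtagdevsec/cowrie:24.04   # prebuilt T-Pot image (name/tag assumed)
    restart: always
    networks:
      - cowrie_local
    ports:
      - "22:22"
      - "23:23"
```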
T-Pot offers docker images for the following honeypots ...
- adbhoney,
- ciscoasa,
- citrixhoneypot,
- conpot,
- cowrie,
- ddospot,
- dicompot,
- dionaea,
- elasticpot,
- endlessh,
- glutton,
- hellpot,
- heralding,
- honeypots,
- honeytrap,
- ipphoney,
- log4pot,
- mailoney,
- medpot,
- redishoneypot,
- sentrypeer,
- snare,
- tanner,
- wordpot
... alongside the following tools ...
- Autoheal, a tool to automatically restart containers with failed healthchecks.
- Cyberchef, a web app for encryption, encoding, compression and data analysis.
- Elastic Stack, to beautifully visualize all the events captured by T-Pot.
- Elasticvue, a web front end for browsing and interacting with an Elasticsearch cluster.
- Fatt, a pyshark-based script for extracting network metadata and fingerprints from pcap files and live network traffic.
- T-Pot-Attack-Map, a beautifully animated attack map for T-Pot.
- P0f, a tool for purely passive traffic fingerprinting.
- Spiderfoot, an open source intelligence automation tool.
- Suricata, a Network Security Monitoring engine.
... to give you the best out-of-the-box experience possible and an easy-to-use multi-honeypot system.
Technical Architecture
The source code and configuration files are fully stored in the T-Pot GitHub repository. The docker images are built and preconfigured for the T-Pot environment.
The individual Dockerfiles and configurations are located in the docker folder.
Services
T-Pot offers a number of services which are divided into five groups:
- System services provided by the OS
- SSH for secure remote access.
- Elastic Stack
- Elasticsearch for storing events.
- Logstash for ingesting, receiving and sending events to Elasticsearch.
- Kibana for displaying events on beautifully rendered dashboards.
- Tools
- NGINX provides secure remote access (reverse proxy) to Kibana, CyberChef, Elasticvue, GeoIP AttackMap, Spiderfoot and allows for T-Pot sensors to securely transmit event data to the T-Pot hive.
- CyberChef, a web app for encryption, encoding, compression and data analysis.
- Elasticvue, a web front end for browsing and interacting with an Elasticsearch cluster.
- T-Pot Attack Map, a beautifully animated attack map for T-Pot.
- Spiderfoot, an open source intelligence automation tool.
- Honeypots
- A selection of the 23 available honeypots based on the selected `docker-compose.yml`.
- Network Security Monitoring (NSM)
- Fatt, a pyshark-based script for extracting network metadata and fingerprints from pcap files and live network traffic.
- P0f, a tool for purely passive traffic fingerprinting.
- Suricata, a Network Security Monitoring engine.
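Whether the remote access services actually came up can be checked from the T-Pot host. A small sketch using `ss` against the management ports documented in the Required Ports section (64294 sensor data, 64295 SSH, 64297 NGINX reverse proxy):

```shell
# Sketch: check whether the T-Pot management services are listening on the host.
# Port numbers as documented in the Required Ports section.
status=$(for port in 64294 64295 64297; do
  if ss -tln 2>/dev/null | grep -q ":$port "; then
    echo "port $port: listening"
  else
    echo "port $port: not listening"
  fi
done)
printf '%s\n' "$status"
```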
User Types
During the installation and the usage of T-Pot there are two different types of accounts you will be working with. Make sure you know the differences between the account types, since mixing them up is by far the most common reason for authentication errors.
Service | Account Type | Username / Group | Description |
---|---|---|---|
SSH | OS | <OS_USERNAME> | The user you chose during the installation of the OS. |
Nginx | BasicAuth | <WEB_USER> | <web_user> you chose during the installation of T-Pot. |
CyberChef | BasicAuth | <WEB_USER> | <web_user> you chose during the installation of T-Pot. |
Elasticvue | BasicAuth | <WEB_USER> | <web_user> you chose during the installation of T-Pot. |
Geoip Attack Map | BasicAuth | <WEB_USER> | <web_user> you chose during the installation of T-Pot. |
Spiderfoot | BasicAuth | <WEB_USER> | <web_user> you chose during the installation of T-Pot. |
T-Pot | OS | tpot | This user / group is always reserved by the T-Pot services. |
T-Pot Logs | BasicAuth | <LS_WEB_USER> | The <LS_WEB_USER> credentials are managed automatically. |
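The BasicAuth accounts are ordinary htpasswd-style entries consumed by NGINX. As an illustration only (T-Pot's installer manages `<WEB_USER>` for you; the username and password below are placeholders), such an entry can be generated with `openssl`:

```shell
# Illustrative only: generate an htpasswd-style BasicAuth entry with openssl.
# T-Pot's installer manages <WEB_USER> itself; this just shows the format.
web_user="demo"          # placeholder username
web_pass="s3cret"        # placeholder password
hash=$(openssl passwd -apr1 "$web_pass")
entry="$web_user:$hash"
echo "$entry"
```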
System Requirements
Depending on the supported Linux distro image, the hive / sensor role, and whether you install on real hardware, in a virtual machine or in another environment, different requirements regarding OS, RAM, storage and network have to be met for a successful installation of T-Pot (you can always adjust `~/tpotce/docker-compose.yml` and `~/tpotce/.env` to your needs).
T-Pot Type | RAM | Storage | Description |
---|---|---|---|
Hive | 16GB | 256GB SSD | As a rule of thumb, the more sensors & data, the more RAM and storage is needed. |
Sensor | 8GB | 128GB SSD | Since honeypot logs are persisted (~/tpotce/data) for 30 days, storage depends on attack volume. |
T-Pot does require ...
- an IPv4 address via DHCP or statically assigned
- a working, non-proxied, internet connection
... for a successful installation and operation.
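The two hard requirements can be sanity-checked up front. A sketch, assuming a Linux host with `iproute2`; the probe hostname is just an example of a reachable HTTPS endpoint:

```shell
# Sketch: verify an IPv4 address is assigned and outgoing HTTPS works.
net_status=$(
  if ip -4 addr show scope global 2>/dev/null | grep -q "inet "; then
    echo "OK: global IPv4 address assigned"
  else
    echo "WARN: no global IPv4 address found"
  fi
  if curl -s --max-time 10 -o /dev/null https://github.com 2>/dev/null; then
    echo "OK: outgoing HTTPS works"
  else
    echo "WARN: no unfiltered outgoing internet connection"
  fi
)
printf '%s\n' "$net_status"
```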
If you need proxy support or otherwise non-standard features, you should check the docs of the supported Linux distro images and / or the Docker documentation.
Running in a VM
All of the supported Linux distro images will run in a VM, which means T-Pot will run just fine. The following were tested / reported to work:
- UTM (Intel & Apple Silicon)
- VirtualBox
- VMWare Fusion and VMWare Workstation
- KVM is reported to work as well.
Some configuration / setup hints:
- While Intel versions run stable, Apple Silicon (arm64) support has known issues, which in UTM may require switching `Display` to `Console Only` during the initial installation of the OS and back to `Full Graphics` afterwards.
- During configuration you may need to enable promiscuous mode for the network interface in order for fatt, suricata and p0f to work properly.
- If you want to use a wifi card as a primary NIC for T-Pot, please be aware that not all network interface drivers support all wireless cards. In VirtualBox e.g. you have to choose the "MT SERVER" model of the NIC.
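Whether promiscuous mode is active can be verified from inside the guest; enabling it (`ip link set eth0 promisc on`, with `eth0` as an example interface name) needs root and additionally requires the hypervisor to allow it. A sketch reading the interface flags from sysfs:

```shell
# Sketch: report which interfaces currently have promiscuous mode enabled.
# IFF_PROMISC is flag bit 0x100 in /sys/class/net/<dev>/flags.
promisc=$(for dev in /sys/class/net/*; do
  name=$(basename "$dev")
  flags=$(cat "$dev/flags")
  if [ $(( flags & 0x100 )) -ne 0 ]; then
    echo "$name: promiscuous mode on"
  else
    echo "$name: promiscuous mode off"
  fi
done)
printf '%s\n' "$promisc"
```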
Running on Hardware
T-Pot is only limited by the hardware support of the supported Linux distro images. It is recommended to check the HCL (hardware compatibility list) and test the supported distros with T-Pot before investing in dedicated hardware.
Running in a Cloud
T-Pot is tested on and known to run on ...
- Telekom OTC using the post install method

... others may work, but remain untested.
Some users report working installations on other clouds and hosters, e.g. Azure and GCP. Hardware requirements may be different. If you are unsure, you should research issues and discussions and run some functional tests. With T-Pot 24.04.0 and forward we made sure to remove settings that were known to interfere with cloud based installations.
Required Ports
Besides the ports generally needed by the OS, e.g. for obtaining a DHCP lease, DNS, etc., T-Pot requires the following ports for incoming / outgoing connections. Review the T-Pot Architecture for a visual representation. Some ports will show up as duplicates, which is fine since they are used in different editions.
Port | Protocol | Direction | Description |
---|---|---|---|
80, 443 | tcp | outgoing | T-Pot Management: Install, Updates, Logs (e.g. OS, GitHub, DockerHub, Sicherheitstacho, etc.) |
64294 | tcp | incoming | T-Pot Management: Sensor data transmission to hive (through NGINX reverse proxy) to 127.0.0.1:64305 |
64295 | tcp | incoming | T-Pot Management: Access to SSH |
64297 | tcp | incoming | T-Pot Management: Access to NGINX reverse proxy |
5555 | tcp | incoming | Honeypot: ADBHoney |
5000 | udp | incoming | Honeypot: CiscoASA |
8443 | tcp | incoming |