Advanced Literate Machinery

Introduction

The ultimate goal of our research is to build a system that has high-level intelligence, i.e., possessing the abilities to read, think, and create, so advanced that it may one day even surpass human intelligence. We call this kind of system Advanced Literate Machinery (ALM).

To start with, we currently focus on teaching machines to read from images and documents. In the years to come, we will explore the possibilities of endowing machines with the intellectual capabilities of thinking and creating, catching up with and eventually surpassing GPT-4 and GPT-4V.

This project is maintained by the 读光 OCR Team (读光-Du Guang means “Reading The Light”) in the Tongyi Lab, Alibaba Group.


Visit our 读光-Du Guang Portal and DocMaster to experience online demos for OCR and Document Understanding.

Recent Updates

2024.4 Release

  • OmniParser (OmniParser: A Unified Framework for Text Spotting, Key Information Extraction and Table Recognition, CVPR 2024. paper): We propose a universal model for parsing visually-situated text across diverse scenarios, called OmniParser, which can simultaneously handle three typical visually-situated text parsing tasks: text spotting, key information extraction, and table recognition. In OmniParser, all tasks share the unified encoder-decoder architecture, the unified objective: point-conditioned text generation, and the unified input & output representation: prompt & structured sequences.
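
To make the unified interface concrete, here is a minimal, hypothetical sketch in plain Python of the "prompt & structured sequences" idea: every task is expressed as a task token plus point-condition tokens on the input side, and a structured token sequence on the output side. The token names and formats below are illustrative assumptions, not OmniParser's actual vocabulary.

```python
# Hypothetical sketch of OmniParser's unified I/O representation:
# every task is "task prompt + center points in, structured sequence out".
# Token names and formats here are illustrative, not the actual vocabulary.

def quantize(x: float, y: float, bins: int = 1000) -> tuple[int, int]:
    """Map normalized coordinates in [0, 1] to discrete location tokens."""
    return int(x * (bins - 1)), int(y * (bins - 1))

def build_prompt(task: str, points: list[tuple[float, float]]) -> list[str]:
    """Unified input: one task token followed by point-condition tokens."""
    seq = [f"<task:{task}>"]
    for x, y in points:
        qx, qy = quantize(x, y)
        seq += [f"<x:{qx}>", f"<y:{qy}>"]
    return seq

# The same decoder emits differently structured sequences per task:
spotting_out = ["<x:102>", "<y:311>", "H", "e", "l", "l", "o", "<eos>"]
kie_out      = ["<field:company>", "A", "C", "M", "E", "<eos>"]
table_out    = ["<tr>", "<td>", "<x:88>", "<y:140>", "</td>", "</tr>", "<eos>"]

print(build_prompt("spotting", [(0.10, 0.31)]))
```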

2024.3 Release

  • GEM (GEM: Gestalt Enhanced Markup Language Model for Web Understanding via Render Tree, EMNLP 2023. paper): Web pages serve as crucial carriers for humans to acquire and perceive information. Inspired by the Gestalt psychological theory, we propose an innovative Gestalt Enhanced Markup Language Model (GEM for short) for hosting heterogeneous visual information from render trees of web pages, leading to excellent performances on tasks such as web question answering and web information extraction.
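
As a rough illustration of what "hosting heterogeneous visual information from render trees" can mean, the sketch below pairs each render-tree node's markup and text with computed visual attributes and flattens the tree into per-node features. The data structure and field names are assumptions for illustration, not GEM's actual feature schema.

```python
# Hypothetical sketch of the kind of input a render-tree model consumes:
# nodes pairing markup/text with computed visual (gestalt) attributes.
from dataclasses import dataclass, field

@dataclass
class RenderNode:
    tag: str                          # HTML tag, e.g. "div", "span"
    text: str                         # visible text, "" for pure containers
    bbox: tuple[int, int, int, int]   # rendered (x0, y0, x1, y1) in pixels
    font_size: float                  # computed style
    children: list["RenderNode"] = field(default_factory=list)

def flatten(node: RenderNode):
    """Depth-first flattening: one (markup, text, visual-feature) row per node."""
    yield node.tag, node.text, node.bbox, node.font_size
    for child in node.children:
        yield from flatten(child)

page = RenderNode("body", "", (0, 0, 1280, 2000), 16.0, [
    RenderNode("h1", "Price list", (40, 32, 600, 80), 32.0),
    RenderNode("span", "$9.99", (40, 96, 140, 120), 16.0),
])
for row in flatten(page):
    print(row)
```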

2023.9 Release

  • DocXChain (DocXChain: A Powerful Open-Source Toolchain for Document Parsing and Beyond, arXiv 2023. report): To advance the digitization and structurization of documents, we develop and release an open-source toolchain, called DocXChain, for precise and detailed document parsing. It currently provides basic capabilities, including text detection, text recognition, table structure recognition, and layout analysis. In addition, typical pipelines, i.e., general text reading, table parsing, and document structurization, are built to support more complicated document-related applications. Most of the algorithmic models are from ModelScope. Formula recognition (using models from RapidLatexOCR) and whole-PDF conversion (PDF to JSON format) are now supported.
  • LISTER (LISTER: Neighbor Decoding for Length-Insensitive Scene Text Recognition, ICCV 2023. paper): We propose a method called Length-Insensitive Scene TExt Recognizer (LISTER), which remedies the limited robustness of existing recognizers to varying text lengths. Specifically, a Neighbor Decoder is proposed to obtain accurate character attention maps with the assistance of a novel neighbor matrix, regardless of the text length. In addition, a Feature Enhancement Module is devised to model long-range dependency at low computation cost; it can iterate with the Neighbor Decoder to enhance the feature map progressively.
  • VGT (Vision Grid Transformer for Document Layout Analysis, ICCV 2023. paper): To fully leverage multi-modal information and exploit pre-training techniques to learn better representations for document layout analysis (DLA), we present VGT, a two-stream Vision Grid Transformer, in which a Grid Transformer (GiT) is proposed and pre-trained for 2D token-level and segment-level semantic understanding (a toy illustration of the grid-style input follows this list). In addition, a new benchmark for assessing document layout analysis algorithms, called D^4LA, is curated and released.
  • VLPT-STD (Vision-Language Pre-Training for Boosting Scene Text Detectors, CVPR 2022. paper): We adapt vision-language joint learning to scene text detection, a task that intrinsically involves cross-modal interaction between the two modalities: vision and language. The pre-trained model produces more informative representations with richer semantics, which readily benefit existing scene text detectors (such as EAST and DB) in the downstream text detection task.
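
The grid input referenced in the VGT entry above can be pictured as a chargrid/BERTgrid-style 2D map in which each cell carries the id of the OCR token covering it, so token semantics stay aligned with page layout. The sketch below is minimal and assumes that construction; the grid resolution, token ids, and OCR boxes are made up.

```python
# Minimal sketch of a chargrid/BERTgrid-style 2D token grid, the kind of
# layout-aligned input a Grid Transformer can reason over. Grid size,
# token ids, and the OCR results below are illustrative assumptions.
import numpy as np

H, W, cell = 64, 64, 16                   # grid resolution, pixels per cell
grid = np.zeros((H, W), dtype=np.int64)   # 0 = background / no token

# OCR output: (token_id, x0, y0, x1, y1) in page pixels
ocr_tokens = [(101, 30, 40, 260, 70), (102, 30, 90, 150, 120)]

for tok_id, x0, y0, x1, y1 in ocr_tokens:
    # paint every grid cell the token's box covers with its id
    grid[y0 // cell:(y1 + cell - 1) // cell,
         x0 // cell:(x1 + cell - 1) // cell] = tok_id

print(grid[2:8, 0:18])   # token ids laid out in 2D, preserving page layout
```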

2023.6 Release

  • LiteWeightOCR (Building A Mobile Text Recognizer via Truncated SVD-based Knowledge Distillation-Guided NAS, BMVC 2023. paper): To make OCR models deployable on mobile devices while keeping high accuracy, we propose a light-weight text recognizer that integrates Truncated Singular Value Decomposition (TSVD)-based Knowledge Distillation (KD) into the Neural Architecture Search (NAS) process.
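
The TSVD component can be illustrated independently of the NAS loop: truncating the SVD of a dense weight matrix replaces one large linear layer with two thin ones, shrinking parameters and compute. A minimal numpy sketch, with an illustrative layer size and rank:

```python
# Minimal sketch of the Truncated SVD (TSVD) idea: a dense layer W (m x n)
# is replaced by two thin layers (m x k) @ (k x n) with k << min(m, n).
# The layer size and rank below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))      # original layer weight
x = rng.standard_normal(512)             # example input

k = 64                                   # truncation rank
U, S, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :k] * S[:k]                     # first thin layer  (512 x 64)
B = Vt[:k, :]                            # second thin layer (64 x 512)

full = W @ x
approx = A @ (B @ x)                     # two cheap matmuls replace one big one
print(W.size, "->", A.size + B.size)     # 262144 -> 65536 parameters
rel_err = np.linalg.norm(full - approx) / np.linalg.norm(full)
print(f"relative error: {rel_err:.3f}")
```

Real recognizer weights are typically far closer to low-rank than this random matrix, so in practice the approximation error is much smaller than the toy number printed here.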

2023.4 Release

  • GeoLayoutLM (GeoLayoutLM: Geometric Pre-training for Visual Information Extraction, CVPR 2023. paper): We propose a multi-modal framework, named GeoLayoutLM, for Visual Information Extraction (VIE). In contrast to previous methods for document pre-training, which usually learn geometric representation in an implicit way, GeoLayoutLM explicitly models the geometric relations of entities in documents.
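
One way to picture "explicitly modeling geometric relations": for any pair of entity boxes, compute relation features such as relative offset, distance, and direction, and feed them to the model alongside the usual text and layout embeddings. The particular feature set below is an assumption for illustration, not GeoLayoutLM's exact formulation.

```python
# Illustrative sketch of explicit pairwise geometry between document
# entities: relative offsets, distance, and angle between box centers.
# The exact relation features GeoLayoutLM uses differ; these are assumptions.
import math

def center(box):
    x0, y0, x1, y1 = box
    return (x0 + x1) / 2, (y0 + y1) / 2

def geometric_relation(box_a, box_b):
    (ax, ay), (bx, by) = center(box_a), center(box_b)
    dx, dy = bx - ax, by - ay
    return {
        "dx": dx, "dy": dy,
        "distance": math.hypot(dx, dy),
        "angle": math.atan2(dy, dx),                       # direction A -> B
        "same_row": abs(dy) < (box_a[3] - box_a[1]) / 2,   # rough row test
    }

key_box   = (40, 100, 160, 130)    # e.g. the field name "Date:"
value_box = (180, 100, 320, 130)   # e.g. the value "2023-04-01"
print(geometric_relation(key_box, value_box))
```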

2023.2 Release

  • LORE-TSR (LORE: Logical Location Regression Network for Table Structure Recognition, AAAI 2023. paper): We model Table Structure Recognition (TSR) as a logical location regression problem and propose a new algorithm called LORE, standing for LOgical location REgression network, which for the first time combines logical location regression with spatial location regression of table cells.
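
A small sketch of why logical locations are useful: once every detected cell carries an inclusive (row_start, row_end, col_start, col_end), the table structure can be materialized directly, spans included. The cell list below is an invented example, not actual LORE output.

```python
# Sketch of what logical locations buy you: given per-cell row/column
# spans, the table grid follows directly. The cells are invented examples.
cells = [
    # (text, row_start, row_end, col_start, col_end)  -- inclusive indices
    ("Model",  0, 0, 0, 0),
    ("Score",  0, 0, 1, 1),
    ("LORE",   1, 1, 0, 0),
    ("0.98",   1, 1, 1, 1),
]

n_rows = max(c[2] for c in cells) + 1
n_cols = max(c[4] for c in cells) + 1
grid = [["" for _ in range(n_cols)] for _ in range(n_rows)]

for text, r0, r1, c0, c1 in cells:
    for r in range(r0, r1 + 1):          # spanning cells fill every slot
        for c in range(c0, c1 + 1):
            grid[r][c] = text

for row in grid:
    print(" | ".join(f"{col:<6}" for col in row))
```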

2022.9 Release

  • MGP-STR (Multi-Granularity Prediction for Scene Text Recognition, ECCV 2022. paper): Based on ViT and a tailored Adaptive Addressing and Aggregation module, we explore an implicit way of incorporating linguistic knowledge by introducing subword representations to facilitate multi-granularity prediction and fusion in scene text recognition (a toy fusion sketch follows this list).
  • LevOCR (Levenshtein OCR, ECCV 2022. paper): Inspired by Levenshtein Transformer, we cast the problem of scene text recognition as an iterative sequence refinement process, which allows for parallel decoding, dynamic length change and good interpretability.
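
The multi-granularity fusion mentioned in the MGP-STR entry can be pictured as follows: the character, subword, and wordpiece heads each decode a string with a confidence score, and the most confident reading is kept. The candidate strings and scores below are illustrative; the actual fusion details are in the paper and code.

```python
# Hedged sketch of multi-granularity fusion at inference time: each head
# decodes its own string with a sequence confidence, and the most confident
# reading wins. Strings and scores here are illustrative assumptions.
candidates = {
    "character": ("c0ffee", 0.81),   # per-character decoding, one misread
    "subword":   ("coffee", 0.93),   # BPE-level decoding
    "wordpiece": ("coffee", 0.90),   # WordPiece-level decoding
}

granularity, (text, conf) = max(candidates.items(), key=lambda kv: kv[1][1])
print(f"fused prediction: {text!r} from {granularity} head (conf={conf:.2f})")
```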