Welcome to the official repository of StructEqTable-Deploy, a solution that converts table images into LaTeX, powered by scalable data from the [DocGenome benchmark](https://unimodal4reasoning.github.io/DocGenome_page/).
## Overview
Tables are an effective way to represent structured data in scientific publications, financial statements, invoices, web pages, and many other scenarios. Extracting tabular data from a visual table image and performing downstream reasoning tasks on the extracted data is challenging, mainly because tables often have complicated column and row headers with spanning cells. To address these challenges, we present TableX, a large-scale multi-modal table benchmark extracted from the [DocGenome benchmark](https://unimodal4reasoning.github.io/DocGenome_page/) for table pre-training, comprising more than 2 million high-quality image-LaTeX pairs covering 156 disciplinary classes. Benefiting from such large-scale data, we train an end-to-end model, StructEqTable, which precisely obtains the corresponding LaTeX description from a visual table image and performs multiple table-related reasoning tasks, including structural extraction and question answering, broadening its application scope and potential.
## Changelog
Tips: The current version of StructEqTable can process table images from scientific documents such as arXiv and SciHub papers. Times New Roman and Songti (宋体) are the main fonts used in the table images; other fonts may decrease the accuracy of the model's output.
- [2024/8/08] 🔥 We have released the TensorRT-accelerated version, which takes only about 1 second for most images on an A100 GPU. Please follow the tutorial to install the environment and compile the model weights.
- [2024/7/30] We have released the first version of StructEqTable.
## TODO
- [x] Release inference code and checkpoints of StructEqTable.
- [x] Support Chinese version of StructEqTable.
- [x] Accelerated version of StructEqTable using TensorRT-LLM.
- [ ] Expand to more domains of table images to improve the model's general capabilities.
- [ ] Release our table pre-training and fine-tuning code.
## Efficient Inference
Our model now supports TensorRT-LLM deployment, achieving a 10x or greater speedup during inference.
Please refer to [GETTING_STARTED.md](docs/GETTING_STARTED.md) to learn how to deploy it.
[TensorRT-LLM](https://github.com/NVIDIA/TensorRT-LLM) is used to speed up model inference.
### 1. Conda or Python Environment Preparation
* Please follow steps 1 and 2 of the TensorRT-LLM [official tutorial](https://nvidia.github.io/TensorRT-LLM/installation/linux.html) to install the environment.
7
+
The steps below cover installation on Linux.
Step 1. Retrieve and launch the docker container (optional).
You can pre-install the environment using the [NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit) to avoid manual environment configuration.
```bash
# Obtain and start the basic docker image environment (optional).
docker run --rm --ipc=host --runtime=nvidia --gpus all --entrypoint /bin/bash -it nvidia/cuda:12.4.1-devel-ubuntu22.04
```
Note: please make sure to set `--ipc=host` as a docker run argument to avoid `Bus error (core dumped)`.
Please note that TensorRT-LLM depends on TensorRT. When upgrading from an earlier version that includes TensorRT 8, you may need to explicitly run `pip uninstall tensorrt` to uninstall the old version.
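Before upgrading, it can help to see which TensorRT-related packages are already present in the environment. A minimal standard-library sketch (the package names checked are the ones mentioned above; this script is illustrative and not part of the official tooling):

```python
from importlib import metadata

def installed_versions(packages=("tensorrt", "tensorrt_llm")):
    """Return a mapping of package name -> installed version, or None if absent."""
    versions = {}
    for name in packages:
        try:
            versions[name] = metadata.version(name)
        except metadata.PackageNotFoundError:
            # Package is not installed in the current environment.
            versions[name] = None
    return versions

if __name__ == "__main__":
    for name, version in installed_versions().items():
        print(f"{name}: {version or 'not installed'}")
```

If `tensorrt` reports an old version after an upgrade, that is the situation where `pip uninstall tensorrt` is needed.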
* Once you can successfully execute `python3 -c "import tensorrt_llm"`, the environment preparation is complete.
Tips: If you want to install the environment manually, please note that Python >= 3.10 is required.
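The interpreter requirement can also be verified programmatically; a minimal sketch:

```python
import sys

def check_python_version(minimum=(3, 10)):
    """Return True if the running interpreter meets the minimum (major, minor) version."""
    return sys.version_info[:2] >= minimum

if __name__ == "__main__":
    if not check_python_version():
        raise SystemExit(f"Python >= 3.10 required, found {sys.version.split()[0]}")
    print("Python version OK")
```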
### 2. Model Compilation
You can refer to the [official tutorial](https://nvidia.github.io/TensorRT-LLM/quick-start-guide.html) to complete the model compilation, or follow our instructions and use the provided scripts.
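For orientation, TensorRT-LLM compilation generally follows a two-step convert-then-build flow. The script name, paths, and flags below are illustrative assumptions based on that general workflow, not the exact commands for this repository; use the provided scripts for the real invocation:

```shell
# Illustrative sketch of the usual TensorRT-LLM compile flow.
# Paths and the checkpoint-conversion script name are assumptions.

# Step 1. Convert the released checkpoint into the TensorRT-LLM checkpoint format.
python convert_checkpoint.py \
    --model_dir ./StructEqTable \
    --output_dir ./trt_ckpt

# Step 2. Build the optimized TensorRT engine from the converted checkpoint.
trtllm-build \
    --checkpoint_dir ./trt_ckpt \
    --output_dir ./trt_engines
```

The built engine directory is what the accelerated inference path loads instead of the original weights.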