# VideoSys

**Repository Path**: sandlake/VideoSys

## Basic Information

- **Project Name**: VideoSys
- **Description**: VideoSys
- **Primary Language**: Python
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-10-07
- **Last Updated**: 2024-10-27

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README
| Quick Start | Supported Models | Accelerations | Discord | Media | HuggingFace Space |
### Latest News 🔥

- [2024/09] Support [CogVideoX](https://github.com/THUDM/CogVideo), [Vchitect-2.0](https://github.com/Vchitect/Vchitect-2.0) and [Open-Sora-Plan v1.2.0](https://github.com/PKU-YuanGroup/Open-Sora-Plan).
- [2024/08] 🔥 Evolve from [OpenDiT](https://github.com/NUS-HPC-AI-Lab/VideoSys/tree/v1.0.0) to VideoSys: an easy and efficient system for video generation.
- [2024/08] 🔥 Release PAB paper: [Real-Time Video Generation with Pyramid Attention Broadcast](https://arxiv.org/abs/2408.12588).
- [2024/06] 🔥 Propose Pyramid Attention Broadcast (PAB) [[paper](https://arxiv.org/abs/2408.12588)][[blog](https://oahzxl.github.io/PAB/)][[doc](./docs/pab.md)], the first approach to achieve real-time DiT-based video generation, delivering negligible quality loss without requiring any training.
- [2024/06] Support [Open-Sora-Plan](https://github.com/PKU-YuanGroup/Open-Sora-Plan) and [Latte](https://github.com/Vchitect/Latte).
- [2024/03] 🔥 Propose Dynamic Sequence Parallelism (DSP) [[paper](https://arxiv.org/abs/2403.10266)][[doc](./docs/dsp.md)], which achieves a **3x** speedup for training and a **2x** speedup for inference in Open-Sora compared with state-of-the-art sequence parallelism.
- [2024/03] Support [Open-Sora](https://github.com/hpcaitech/Open-Sora).
- [2024/02] 🎉 Release [OpenDiT](https://github.com/NUS-HPC-AI-Lab/VideoSys/tree/v1.0.0): An Easy, Fast and Memory-Efficient System for DiT Training and Inference.

# About

VideoSys is an open-source project that provides a user-friendly and high-performance infrastructure for video generation. This comprehensive toolkit will support the entire pipeline from training and inference to serving and compression. We are committed to continually integrating cutting-edge open-source video models and techniques. Stay tuned for exciting enhancements and new features on the horizon!
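The core idea behind PAB, as described above, is that attention outputs change little between nearby diffusion steps, so they can be cached and "broadcast" (reused) for several steps, with longer reuse intervals for attention types whose outputs drift more slowly. A minimal toy sketch of that caching scheme (illustrative only: `PABCache` and the interval values are assumptions for this sketch, not the VideoSys API; see [doc](./docs/pab.md) for the real interface):

```python
# Toy sketch of the Pyramid Attention Broadcast (PAB) idea: reuse cached
# attention outputs across consecutive diffusion steps instead of
# recomputing them, with a different (pyramid-like) broadcast range per
# attention type. All names and intervals here are illustrative.

BROADCAST_RANGE = {"spatial": 2, "temporal": 4, "cross": 6}  # assumed intervals

class PABCache:
    def __init__(self):
        self.cache = {}      # attn_type -> last computed output
        self.last_step = {}  # attn_type -> step it was computed at

    def attention(self, attn_type, step, compute_fn):
        interval = BROADCAST_RANGE[attn_type]
        if attn_type in self.cache and step - self.last_step[attn_type] < interval:
            return self.cache[attn_type]  # broadcast: reuse the cached output
        out = compute_fn()                # recompute at a broadcast boundary
        self.cache[attn_type] = out
        self.last_step[attn_type] = step
        return out
```

Over a 12-step sampling loop, this sketch would recompute spatial attention 6 times, temporal attention 3 times, and cross attention twice, skipping the rest, which is where the inference speedup comes from.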
## Installation

Prerequisites:

- Python >= 3.10
- PyTorch >= 1.13 (we recommend version 2.0 or later)
- CUDA >= 11.6

We strongly recommend using Anaconda to create a new environment (Python >= 3.10) to run our examples:

```shell
conda create -n videosys python=3.10 -y
conda activate videosys
```

Install VideoSys:

```shell
git clone https://github.com/NUS-HPC-AI-Lab/VideoSys
cd VideoSys
pip install -e .
```

## Usage

VideoSys supports many diffusion models with our various acceleration techniques, enabling these models to run faster and consume less memory. The table below lists all available models and the acceleration techniques (DSP and PAB) they support. Click `Code` to see how to use them.

| Model | Train | Infer | DSP | PAB | Usage |
|---|---|---|---|---|---|
| Vchitect [source] | / | ✅ | ✅ | ✅ | Code |
| CogVideoX [source] | / | ✅ | / | ✅ | Code |
| Latte [source] | / | ✅ | ✅ | ✅ | Code |
| Open-Sora-Plan [source] | / | ✅ | ✅ | ✅ | Code |
| Open-Sora [source] | 🟡 | ✅ | ✅ | ✅ | Code |
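The DSP column above refers to Dynamic Sequence Parallelism, which shards video activations across devices and uses an all-to-all to switch the sharded dimension, so temporal attention sees full temporal sequences and spatial attention sees full spatial sequences. A toy single-process sketch of that resharding step (illustrative only: `shard_rows` and `switch_shard_dim` are hypothetical names, not the VideoSys implementation; see [doc](./docs/dsp.md)):

```python
# Toy single-process sketch of the resharding at the heart of Dynamic
# Sequence Parallelism (DSP). A [temporal][spatial] activation grid is
# sharded over w simulated ranks along the temporal dim; a simulated
# all-to-all switches the shard to the spatial dim, after which each
# rank holds the full temporal sequence for its spatial slice.

def shard_rows(x, w):
    """Shard the temporal (row) dimension across w simulated ranks."""
    t = len(x)
    return [x[r * t // w:(r + 1) * t // w] for r in range(w)]

def switch_shard_dim(row_shards, w):
    """Simulated all-to-all: temporal-sharded -> spatial-sharded."""
    # Each rank splits its local rows column-wise into w chunks...
    send = []
    for local in row_shards:
        s = len(local[0])
        send.append([[row[c * s // w:(c + 1) * s // w] for row in local]
                     for c in range(w)])
    # ...sends chunk c to rank c (all-to-all semantics)...
    recv = [[send[r][c] for r in range(w)] for c in range(w)]
    # ...and concatenates the received chunks along the temporal dim.
    return [[row for chunk in rank_chunks for row in chunk]
            for rank_chunks in recv]
```

Because a single collective switches the sharded dimension in place, each attention type can run locally over a complete axis, which is where DSP's advantage over resharding-heavy sequence parallelism comes from.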