# FlashVideo
**Repository Path**: cntony/FlashVideo
## Basic Information
- **Project Name**: FlashVideo
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-02-20
- **Last Updated**: 2025-02-20
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Flowing Fidelity to Detail for Efficient High-Resolution Video Generation
[arXiv](https://arxiv.org/abs/2502.05179)
[Project Page](https://jshilong.github.io/flashvideo-page/)
> [**FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation**](https://arxiv.org/abs/2502.05179)
> [Shilong Zhang](https://jshilong.github.io/), [Wenbo Li](https://scholar.google.com/citations?user=foGn_TIAAAAJ&hl=en), [Shoufa Chen](https://www.shoufachen.com/), [Chongjian Ge](https://chongjiange.github.io/), [Peize Sun](https://peizesun.github.io/),
> Yida Zhang, [Yi Jiang](https://enjoyyi.github.io/), [Zehuan Yuan](https://shallowyuan.github.io/), Bingyue Peng, [Ping Luo](http://luoping.me/)
>
HKU, CUHK, ByteDance
## More video examples can be accessed at the [project page](https://jshilong.github.io/flashvideo-page/)
#### ⚡⚡ User Prompt to 270p, NFE = 50, Takes ~30s ⚡⚡
#### ⚡⚡ 270p to 1080p, NFE = 4, Takes ~72s ⚡⚡
[Example output (GIF)](https://github.com/FoundationVision/flashvideo-page/blob/main/static/images/output.gif)
## 🔥 Update
- \[2025.02.10\] 🔥🔥🔥 Inference code and the model [weights](https://huggingface.co/FoundationVision/FlashVideo/tree/main) for both stages have been released.
## Introduction
In this repository, we provide:
- [x] The stage-I weight for 270P video generation.
- [x] The stage-II weight for enhancing 270P videos to 1080P.
- [x] Inference code of both stages.
- [ ] Training code and related augmentation. Work in progress: [PR#12](https://github.com/FoundationVision/FlashVideo/pull/12)
- [x] Loss function
- [ ] Dataset and augmentation
- [ ] Configuration and training script
- [ ] Implementation with diffusers.
- [ ] Gradio demo.
## Install
### 1. Environment Setup
This repository is tested with PyTorch 2.4.0+cu121 and Python 3.11.11. You can install the necessary dependencies using the following command:
```shell
pip install -r requirements.txt
```
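Before downloading the weights, it can be worth confirming that your environment matches the tested setup. The snippet below is just a convenience check using standard PyTorch introspection; it is not part of the FlashVideo codebase:

```python
# Sanity-check the environment against the tested setup
# (Python 3.11.11, PyTorch 2.4.0+cu121); not part of FlashVideo itself.
import sys
import torch

print(f"Python      : {sys.version.split()[0]}")     # expected ~3.11
print(f"PyTorch     : {torch.__version__}")          # expected 2.4.0+cu121
print(f"CUDA build  : {torch.version.cuda}")         # expected 12.1
print(f"CUDA device : {torch.cuda.is_available()}")  # should be True for inference
```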
### 2. Preparing the Checkpoints
To obtain the 3D VAE (identical to the one used in CogVideoX) along with the Stage-I and Stage-II weights, set them up as follows:
```shell
cd FlashVideo
mkdir -p ./checkpoints
huggingface-cli download --local-dir ./checkpoints FoundationVision/FlashVideo
```
The checkpoints should be organized as shown below:
```
./checkpoints
├── 3d-vae.pt
├── stage1.pt
└── stage2.pt
```
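If you want to verify that the download completed, a quick check like the one below will do (file names taken from the layout above; run it from the repository root):

```python
# Confirm that the three checkpoints listed above are present under ./checkpoints.
from pathlib import Path

ckpt_dir = Path("checkpoints")
for name in ("3d-vae.pt", "stage1.pt", "stage2.pt"):
    path = ckpt_dir / name
    status = f"{path.stat().st_size / 1e9:.1f} GB" if path.exists() else "MISSING"
    print(f"{name:<12} {status}")
```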
## Text-to-Video Generation
#### ⚠️ IMPORTANT NOTICE ⚠️: Both stage-I and stage-II are trained with long prompts only. To achieve the best results, include comprehensive and detailed descriptions in your prompts, akin to the example provided in [example.txt](./example.txt).
### Jupyter Notebook
You can conveniently provide user prompts in our Jupyter notebook. The default configuration for spatial and temporal slices in the VAE decoder is tailored to an 80 GB GPU. For GPUs with less memory, consider increasing the number of [spatial and temporal slices](https://github.com/FoundationVision/FlashVideo/blob/400a9c1ef905eab3a1cb6b9f5a5a4c331378e4b5/sat/utils.py#L110).
```
flashvideo/demo.ipynb
```
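For reference, slicing in the VAE decoder simply splits the latent into chunks that are decoded one at a time, so peak decoder memory scales with the chunk size rather than the full video. The sketch below is a simplified illustration of the idea with a hypothetical `vae_decode` function; it is not the actual implementation, which lives in `sat/utils.py`:

```python
# Simplified illustration of temporal slicing during VAE decoding.
# `vae_decode` is a hypothetical stand-in for the real decoder; see sat/utils.py
# for the actual spatial/temporal slicing used by FlashVideo.
import torch

def sliced_decode(latents: torch.Tensor, vae_decode, temporal_slices: int = 4) -> torch.Tensor:
    """Decode latents of shape (B, C, T, H, W) in chunks along the time axis."""
    chunks = torch.chunk(latents, temporal_slices, dim=2)
    frames = [vae_decode(chunk) for chunk in chunks]  # decoder activations for one chunk at a time
    return torch.cat(frames, dim=2)                   # reassemble the full video

# More slices -> smaller chunks -> lower peak memory (at some cost in speed).
```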
### Inferring from a Text File Containing Prompts
You can conveniently provide user prompts in a text file and generate videos with multiple GPUs.
```shell
bash inf_270_1080p.sh
```
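The prompt file itself is plain text. Assuming the common one-prompt-per-line convention (check `inf_270_1080p.sh` and [example.txt](./example.txt) for the authoritative format), loading it looks like:

```python
# Read long prompts from a plain-text file, assuming one prompt per line.
# The exact format expected by inf_270_1080p.sh may differ; this is illustrative only.
from pathlib import Path

prompts = [line.strip() for line in Path("example.txt").read_text().splitlines() if line.strip()]
print(f"Loaded {len(prompts)} prompt(s)")
```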
## License
This project is developed based on [CogVideoX](https://github.com/THUDM/CogVideo). Please refer to their original [license](https://github.com/THUDM/CogVideo?tab=readme-ov-file#model-license) for usage details.
## BibTeX
```bibtex
@article{zhang2025flashvideo,
title={FlashVideo: Flowing Fidelity to Detail for Efficient High-Resolution Video Generation},
author={Zhang, Shilong and Li, Wenbo and Chen, Shoufa and Ge, Chongjian and Sun, Peize and Zhang, Yida and Jiang, Yi and Yuan, Zehuan and Peng, Bingyue and Luo, Ping},
journal={arXiv preprint arXiv:2502.05179},
year={2025}
}
```