Improve MeanVC model card: Update pipeline tag and add sample usage
This PR significantly improves the model card for `MeanVC` by:
- **Correcting Metadata**: The `pipeline_tag` is updated from the incorrect `text-to-speech` to `audio-to-audio`, accurately reflecting the model's function as a voice conversion system; the erroneous `text-to-speech` tag is removed as well.
- **Adding Comprehensive Sample Usage**: A new "Sample Usage" section is added, directly incorporating the detailed "Getting Started" instructions (environment setup, model download, real-time and offline conversion snippets) from the project's GitHub README, making it much easier for users to get started with the model.
- **Updating Image Paths**: Relative image paths (`figs/model.png`, `figs/[email protected]`) are updated to absolute URLs on the Hugging Face Hub (`https://huggingface.co/ASLP-lab/MeanVC/resolve/main/figs/model.png`) for improved robustness and rendering.
These changes enhance the model's discoverability and provide users with clearer, more actionable information.
<div align="center">

[arXiv](https://arxiv.org/pdf/2510.08392)
[GitHub](https://github.com/ASLP-lab/MeanVC)
[Demo](https://aslp-lab.github.io/MeanVC/)

</div>
**MeanVC** is a lightweight and streaming zero-shot voice conversion system that enables real-time timbre transfer from any source speaker to any target speaker while preserving linguistic content. The system introduces a diffusion transformer with a chunk-wise autoregressive denoising strategy and mean flows for efficient single-step inference.

![MeanVC model architecture](https://huggingface.co/ASLP-lab/MeanVC/resolve/main/figs/model.png)
## ✨ Key Features

- **🚀 Streaming Inference**: Real-time voice conversion with chunk-wise processing.
- **⚡ Single-Step Generation**: Direct mapping from start to endpoint via mean flows for fast generation.
- **🎯 Zero-Shot Capability**: Convert to any unseen target speaker without retraining.
- **💾 Lightweight**: Significantly fewer parameters than existing methods.
- **🔊 High Fidelity**: Superior speech quality and speaker similarity.

## 💻 Sample Usage
### 1. Environment Setup

First, clone the repository and set up the required environment.

```bash
# Clone the repository and enter the directory
git clone https://github.com/ASLP-lab/MeanVC.git
cd MeanVC

# Create and activate a Conda environment
conda create -n meanvc python=3.11 -y
conda activate meanvc

# Install dependencies
pip install -r requirements.txt
```
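
As a quick sanity check, you can confirm the environment resolves correctly. This assumes PyTorch is among the pinned requirements (MeanVC is a diffusion-transformer model), so treat it as an optional, illustrative step:

```bash
# Optional sanity check (assumes PyTorch is listed in requirements.txt).
python --version
python -c "import torch; print(torch.__version__, 'CUDA:', torch.cuda.is_available())"
```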
### 2. Download Pre-trained Models

Run the provided script to automatically download all necessary pre-trained models.

```bash
python download_ckpt.py
```

This will download the main VC model, vocoder, and ASR model into the `src/ckpt/` directories.

The speaker verification model (`wavlm_large_finetune.pth`) must be downloaded manually from Google Drive: download it from [this link](https://drive.google.com/file/d/1-aE1NfzpRCLxA4GUxX9ITI3F9LlbtEGP/view) and place it in the `src/runtime/speaker_verification/ckpt/` directory.
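
If you prefer to script this step, the checkpoint can also be fetched with the third-party `gdown` tool, using the file ID from the link above. `gdown` is not part of the project's requirements, so this is only an optional convenience:

```bash
# Optional: download the speaker verification checkpoint from the command line.
pip install gdown
mkdir -p src/runtime/speaker_verification/ckpt
gdown "https://drive.google.com/uc?id=1-aE1NfzpRCLxA4GUxX9ITI3F9LlbtEGP" \
  -O src/runtime/speaker_verification/ckpt/wavlm_large_finetune.pth
```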
### 3. Real-Time Voice Conversion

This script captures audio from your microphone and converts it in real time to the voice of a target speaker.

```bash
python src/runtime/run_rt.py --target-path "path/to/target_voice.wav"
```

- `--target-path`: Path to a clean audio file of the target speaker. This voice will be used as the conversion target. An example file is provided at `src/runtime/example/test.wav`.

When you run the script, you will be prompted to select your audio input (microphone) and output (speaker) devices from a list.
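
For a first run, you can point the script at the example reference audio that ships with the repository:

```bash
# Quick test with the bundled example target voice.
python src/runtime/run_rt.py --target-path "src/runtime/example/test.wav"
```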
### 4. Offline Voice Conversion

For batch processing or converting pre-recorded audio files, use the offline conversion script.

```bash
bash scripts/infer_ref.sh
```

Before running the script, configure the following paths in `scripts/infer_ref.sh` (an illustrative configuration is sketched after the list):

- `source_path`: Path to the source audio file, or a directory containing multiple audio files to be converted.
- `reference_path`: Path to a clean audio file of the target speaker (used as the voice reference).
- `output_dir`: Directory where converted audio files will be saved (default: `src/outputs`).
- `steps`: Number of denoising steps (default: 2).
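
A hypothetical configuration could look like the following. The variable names and defaults come from the list above, while the source path is a placeholder for your own data:

```bash
# Illustrative settings inside scripts/infer_ref.sh -- adjust to your data.
source_path="path/to/source_audio"             # file or directory to convert
reference_path="src/runtime/example/test.wav"  # clean target-speaker reference
output_dir="src/outputs"                       # default output directory
steps=2                                        # default number of denoising steps
```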
## 📄 License & Disclaimer

MeanVC is released under the Apache License 2.0. This open-source license allows you to freely use, modify, and distribute the model, as long as you include the appropriate copyright notice and disclaimer.

MeanVC is designed for research and legitimate applications in voice conversion technology. Users must obtain proper consent from individuals whose voices are being converted or used as references. We strongly discourage any malicious use, including impersonation, fraud, or the creation of misleading audio content. Users are solely responsible for ensuring their use cases comply with ethical standards and legal requirements.

## 📚 Citation
If you find our work helpful, please cite our paper:

```bibtex
@article{ma2025meanvc,
  title={MeanVC: Lightweight and Streaming Zero-Shot Voice Conversion via Mean Flows},
  author={Ma, Guobin and Yao, Jixun and Ning, Ziqian and Jiang, Yuepeng and Xiong, Lingxin and Xie, Lei and Zhu, Pengcheng},
  journal={arXiv preprint arXiv:2510.08392},
  year={2025}
}
```
## 📧 Contact

If you would like to get in touch with our research team, feel free to email [email protected].

<p align="center">
  <img src="https://huggingface.co/ASLP-lab/MeanVC/resolve/main/figs/[email protected]" width="500"/>
</p>