MelGAN

This project is an implementation of MelGAN, a GAN-based vocoder that converts mel spectrograms into raw audio, proposed in this paper.

Quick start

If you just want to run the training script without going into details, run these commands in your terminal:

git clone https://github.com/PUSSYMIPT/MelGAN.git && cd MelGAN
pip install -r requirements/requirements.txt
sudo apt-get install libsndfile1 -y  # optional; needed if librosa cannot load audio
bash bin/download_lj_speech.sh
export PYTHONPATH=$PYTHONPATH:.  # optional; needed if the src package cannot be found
python scripts/preprocess.py -d data/LJSpeech-1.1/wavs
catalyst-dl run -C configs/LJ_config.yml --verbose

Run experiment

First of all, we need to install the required dependencies:

$ pip install -r requirements/requirements.txt

If librosa fails to load audio, you may also need to install an additional system library:

$ sudo apt-get install libsndfile1 -y
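
To check that the audio backend is actually available, you can try importing soundfile, the Python binding to libsndfile that librosa uses for audio I/O. This is a minimal sketch, assuming soundfile was installed as part of the requirements:

import soundfile as sf

# If this import fails or raises an OSError mentioning libsndfile,
# the apt-get command above is the fix.
print(sf.__libsndfile_version__)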

We also need to download and preprocess a dataset, for example LJSpeech-1.1:

bash bin/download_lj_speech.sh
export PYTHONPATH=$PYTHONPATH:.  # optional; needed if the src package cannot be found
python scripts/preprocess.py -d data/LJSpeech-1.1/wavs
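
Before launching a full run, it can be useful to sanity-check the preprocessed data by pulling one item through the project's MelFromDisk dataset. This is only a rough sketch: the path and the structure of a dataset item are assumptions, so adjust them to whatever preprocess.py actually produced.

from src.data.dataset import MelFromDisk

# Hypothetical sanity check; the path below is an assumption.
dataset = MelFromDisk(path="data/LJSpeech-1.1/wavs")
sample = dataset[0]
print(type(sample))  # inspect what a single training item looks like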

Config API

The most production-ready way to run an experiment in Catalyst is via the Config API. Write a config.yml file, download your dataset, and then run:

$ catalyst-dl run -C PATH_TO_CONFIG

We can also run in distributed and/or apex (mixed precision) mode:

$ catalyst-dl run -C PATH_TO_CONFIG --distributed --apex

Reproducibility is guaranteed only in non-distributed mode.
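
When launching training from your own code rather than through the CLI, you can also pin the seeds explicitly. A minimal sketch; set_global_seed is assumed to be available in the installed Catalyst version:

import torch
from catalyst.utils import set_global_seed

set_global_seed(42)  # fixes the random, numpy and torch seeds in one call
torch.backends.cudnn.deterministic = True  # trade cuDNN speed for determinism
torch.backends.cudnn.benchmark = False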

Notebook API

If you want to create something new and run a small experiment to check a hypothesis, you can use the Notebook API. Here is a minimal example:

from collections import OrderedDict

import torch
from catalyst import dl

from src.callbacks.discriminator_loss_callback import DiscriminatorLossCallback
from src.callbacks.generator_loss_callback import GeneratorLossCallback
from src.data.dataset import MelFromDisk
from src.models import Discriminator, Generator
from src.runner import MelGANRunner


def main():
    """Test Notebook API"""
    # Mel spectrograms loaded from disk, served through a single train loader
    dataset = MelFromDisk(path="data/test")
    dataloader = torch.utils.data.DataLoader(dataset)
    loaders = OrderedDict({"train": dataloader})

    # Generator consumes 80-band mel spectrograms; discriminator judges waveforms
    generator = Generator(80)
    discriminator = Discriminator()
    model = torch.nn.ModuleDict(
        {"generator": generator, "discriminator": discriminator}
    )

    # Separate optimizers for the generator and the discriminator
    optimizer = {
        "opt_g": torch.optim.Adam(generator.parameters()),
        "opt_d": torch.optim.Adam(discriminator.parameters()),
    }

    # Loss callbacks compute the GAN losses; each optimizer callback steps
    # its optimizer on the corresponding loss
    callbacks = {
        "loss_g": GeneratorLossCallback(),
        "loss_d": DiscriminatorLossCallback(),
        "o_g": dl.OptimizerCallback(
            metric_key="generator_loss", optimizer_key="opt_g"
        ),
        "o_d": dl.OptimizerCallback(
            metric_key="discriminator_loss", optimizer_key="opt_d"
        ),
    }

    runner = MelGANRunner()
    runner.train(
        model=model,
        loaders=loaders,
        optimizer=optimizer,
        callbacks=callbacks,
        check=True,  # quick sanity-check pass instead of full training
        main_metric="discriminator_loss",
    )


if __name__ == "__main__":
    main()
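
After training, the generator can be used on its own to turn a mel spectrogram back into a waveform. The sketch below is illustrative only: the input layout (batch, 80 mel bands, frames) and the waveform-shaped output are assumptions based on how Generator(80) is constructed above, not documented behaviour.

import torch
from src.models import Generator

generator = Generator(80)  # 80 mel bands, as in the training example
generator.eval()

# Hypothetical input: one example, 80 mel bands, 200 frames
mel = torch.randn(1, 80, 200)

with torch.no_grad():
    audio = generator(mel)

print(audio.shape)  # expected to be a waveform tensor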
