Mirror of https://github.com/thorstenMueller/Thorsten-Voice.git, synced 2024-11-21 23:43:12 +01:00
Dockerfile draft for NVIDIA Jetson Xavier AGX and Coqui
This commit is contained in:
parent 3e09ae8615
commit f505fd38df
helperScripts/Dockerfile.Jetson-Coqui (Normal file, 44 additions)
@@ -0,0 +1,44 @@
# Dockerfile for running Coqui TTS trainings in a Docker container on the NVIDIA Jetson platform.
# Based on the NVIDIA Jetson ML image, provided as is without any warranty by Thorsten Müller (https://twitter.com/ThorstenVoice) in August 2021
FROM nvcr.io/nvidia/l4t-ml:r32.5.0-py3
RUN echo "deb https://repo.download.nvidia.com/jetson/common r32.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
RUN echo "deb https://repo.download.nvidia.com/jetson/t194 r32.4 main" >> /etc/apt/sources.list.d/nvidia-l4t-apt-source.list
RUN apt-get update -y
RUN apt-get install vim python-mecab libmecab-dev cuda-toolkit-10-2 libcudnn8 libcudnn8-dev libsndfile1-dev -y
# Setting some environment vars
ENV LLVM_CONFIG=/usr/bin/llvm-config-9
ENV PYTHONPATH=/coqui/TTS/
ENV LD_LIBRARY_PATH=/usr/local/cuda-10.2/lib64:$LD_LIBRARY_PATH
# Skipping OPENBLAS_CORETYPE might cause an "Illegal instruction (core dumped)" error
ENV OPENBLAS_CORETYPE=ARMV8
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
LABEL com.nvidia.volumes.needed="nvidia_driver"
RUN mkdir /coqui
WORKDIR /coqui
ARG COQUI_BRANCH
RUN git clone -b ${COQUI_BRANCH} https://github.com/coqui-ai/TTS.git
WORKDIR /coqui/TTS
RUN pip3 install pip setuptools wheel --upgrade
RUN pip uninstall -y tensorboard tensorflow tensorflow-estimator nbconvert matplotlib
RUN pip install -r requirements.txt
RUN python3 ./setup.py develop
# Jupyter Notebook
RUN python3 -c "from notebook.auth.security import set_password; set_password('nvidia', '/root/.jupyter/jupyter_notebook_config.json')"
CMD /bin/bash -c "jupyter lab --ip 0.0.0.0 --port 8888 --allow-root"
# Build example:
# nvidia-docker build . -f Dockerfile.Jetson-Coqui --build-arg COQUI_BRANCH=v0.1.3 -t jetson-coqui
# Run example:
# nvidia-docker run -p 8888:8888 -d --shm-size 32g --gpus all -v /ssd/___prj/tts/dataset-july21:/coqui/TTS/data jetson-coqui
# Bash example:
# nvidia-docker exec -it <containerId> /bin/bash
@@ -4,4 +4,24 @@
Python script which takes recordings (filesystem and SQLite DB) done with Mycroft Mimic-Recording-Studio (https://github.com/MycroftAI/mimic-recording-studio) and creates an audio-optimized dataset in the widely supported LJSpeech directory structure.
Peter Schmalfeldt (https://github.com/manifestinteractive) did an amazing job optimizing my original (quick 'n' dirty) version of that script, so thank you Peter :-)
See more details here: https://gist.github.com/manifestinteractive/6fd9be62d0ede934d4e1171e5e751aba#file-mrs2ljspeech-py
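For orientation, a dataset in the LJSpeech layout that the script produces looks roughly like this (directory and file names are illustrative):

```
my-dataset/
    metadata.csv    # pipe-separated lines: <file id>|<transcription>|<normalized transcription>
    wavs/
        0001.wav
        0002.wav
        ...
```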
## Dockerfile.Jetson-Coqui
> Add your user to the `docker` group so you do not need sudo for every operation.
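This is standard Docker setup and not specific to this repo; a minimal sketch:

```bash
# Add the current user to the docker group, then log out and back in
# (or run `newgrp docker`) for the group change to take effect
sudo usermod -aG docker $USER
```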
Thanks to NVIDIA for providing Docker images for the Jetson platform. I use the "machine learning (ML)" image as the base image for setting up a Coqui environment.
> You can use any branch or tag as the `COQUI_BRANCH` argument. v0.1.3 is just the current stable version.
Switch to the directory containing the Dockerfile and run `nvidia-docker build . -f Dockerfile.Jetson-Coqui --build-arg COQUI_BRANCH=v0.1.3 -t jetson-coqui` to build your container image. When the build process has finished you can start a container from that image.
### Mapped volumes
We need to bring your dataset and configuration file into the container, so we map a volume when running it:
`nvidia-docker run -p 8888:8888 -d --shm-size 32g --gpus all -v [host path with dataset and config.json]:/coqui/TTS/data jetson-coqui`. Now we have a running container ready for Coqui TTS magic.
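For example, the mapped host path could look roughly like this (directory and file names are just an assumption, adjust to your setup):

```
my-dataset/
    config.json     # Coqui TTS training configuration
    metadata.csv
    wavs/
```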
### Jupyter notebook
Coqui provides lots of useful Jupyter notebooks for dataset analysis. Once your container is up and running you should be able to open `http://<jetson-ip>:8888` in your browser; the Jupyter password is set to `nvidia` in the Dockerfile.
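The dataset analysis notebooks ship with the cloned Coqui repository; the exact contents depend on the branch you checked out, so the path below is only a pointer:

```bash
# Inside the container: list the bundled dataset analysis notebooks
ls /coqui/TTS/notebooks/dataset_analysis
```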
### Running bash inside the container
Run `nvidia-docker exec -it jetson-coqui /bin/bash`; now you're inside the container, and an `ls /coqui/TTS/data` should show your dataset files.
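From there a training run can be started against the mapped dataset. The trainer scripts differ between Coqui versions, so treat this only as a hedged sketch and check `TTS/bin/` in your checkout for the actual script names:

```bash
# Illustrative only: script name and config path are assumptions
cd /coqui/TTS
python3 TTS/bin/train_tacotron.py --config_path /coqui/TTS/data/config.json
```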