The original attempt would not run on Kaggle because of a CuDNN incompatibility, but this notebook runs fine on Colab.

Preparation

!pip install condacolab
Collecting condacolab
  Downloading https://files.pythonhosted.org/packages/ee/47/6f9fe13087c31aba889c4b09f9beaa558bf216bf9108c9ccef44e6c9dcfe/condacolab-0.1.2-py3-none-any.whl
Installing collected packages: condacolab
Successfully installed condacolab-0.1.2
import condacolab
condacolab.install()
⏬ Downloading https://github.com/jaimergp/miniforge/releases/latest/download/Mambaforge-colab-Linux-x86_64.sh...
📦 Installing...
📌 Adjusting configuration...
🩹 Patching environment...
⏲ Done in 0:00:36
🔁 Restarting kernel...
%%capture
!conda install -c pykaldi pykaldi -y
!git clone https://github.com/jimregan/fairseq/ --branch issue3581
Cloning into 'fairseq'...
remote: Enumerating objects: 28296, done.
remote: Total 28296 (delta 0), reused 0 (delta 0), pack-reused 28296
Receiving objects: 100% (28296/28296), 11.77 MiB | 24.69 MiB/s, done.
Resolving deltas: 100% (21286/21286), done.
!git clone https://github.com/kpu/kenlm
Cloning into 'kenlm'...
remote: Enumerating objects: 13824, done.
remote: Counting objects: 100% (137/137), done.
remote: Compressing objects: 100% (79/79), done.
remote: Total 13824 (delta 76), reused 92 (delta 45), pack-reused 13687
Receiving objects: 100% (13824/13824), 5.49 MiB | 20.76 MiB/s, done.
Resolving deltas: 100% (7956/7956), done.
%%capture
!apt-get -y install libeigen3-dev liblzma-dev zlib1g-dev libbz2-dev
%%capture
%cd /content/kenlm
!python setup.py install
%cd /tmp
import os
os.environ['PATH'] = f"{os.environ['PATH']}:/content/kenlm/build/bin/"
os.environ['FAIRSEQ_ROOT'] = '/content/fairseq'
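Appending the kenlm build directory to `PATH` is what lets child processes spawned later (e.g. fairseq's preprocessing scripts) resolve kenlm binaries by name. A minimal, side-effect-free sketch of the same technique, using a throwaway directory and a stand-in executable (the name `lmplz` here is just illustrative):

```python
import os
import shutil
import stat
import tempfile

# Throwaway directory with a fake executable standing in for a kenlm binary.
bindir = tempfile.mkdtemp()
fake = os.path.join(bindir, "lmplz")
with open(fake, "w") as f:
    f.write("#!/bin/sh\necho ok\n")
os.chmod(fake, os.stat(fake).st_mode | stat.S_IEXEC)

# Extend PATH the same way the cell above does; name lookups now find it.
os.environ["PATH"] = f"{os.environ['PATH']}:{bindir}"
print(shutil.which("lmplz"))  # resolves to the path inside bindir
```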
%cd /content/fairseq/
/content/fairseq
%%capture
!python setup.py install
os.environ['HYDRA_FULL_ERROR'] = '1'
%%capture
!pip install editdistance
%%capture
!pip install kaggle
from google.colab import files

uploaded = files.upload()

for fn in uploaded.keys():
  print(f'User uploaded file "{fn}" with length {len(uploaded[fn])} bytes')
  
# Then move kaggle.json into the folder where the API expects to find it.
!mkdir -p ~/.kaggle/ && mv kaggle.json ~/.kaggle/ && chmod 600 ~/.kaggle/kaggle.json
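The Kaggle CLI expects `kaggle.json` under `~/.kaggle/` with owner-only permissions (`0600`); otherwise it warns or refuses to use the credentials. The shell one-liner above can equally be done from Python. A sketch with the standard library only, using a temporary directory and placeholder credentials so it does not touch the real `~/.kaggle`:

```python
import tempfile
from pathlib import Path

# Stand-in for the real home directory, to keep the sketch side-effect free.
home = Path(tempfile.mkdtemp())
kaggle_dir = home / ".kaggle"
kaggle_dir.mkdir(parents=True, exist_ok=True)

# Pretend kaggle.json was just uploaded to the working directory.
src = home / "kaggle.json"
src.write_text('{"username": "example", "key": "0000"}')  # placeholder credentials

# Move it into place and restrict it to owner read/write, as the CLI expects.
dest = kaggle_dir / "kaggle.json"
src.rename(dest)
dest.chmod(0o600)
print(oct(dest.stat().st_mode & 0o777))  # -> '0o600'
```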
Upload widget is only available when the cell has been executed in the current browser session. Please rerun this cell to enable.
Saving kaggle.json to kaggle.json
User uploaded file "kaggle.json" with length 64 bytes
%cd /content
/content
!kaggle datasets download "jimregan/w2vu-cvsv-prepared-text"
Downloading w2vu-cvsv-prepared-text.zip to /content
 75% 13.0M/17.4M [00:00<00:00, 55.1MB/s]
100% 17.4M/17.4M [00:00<00:00, 64.5MB/s]
%%capture
!unzip /content/w2vu-cvsv-prepared-text.zip
!kaggle datasets download -d jimregan/w2vu-cvsv-precompute-pca512-cls128-mean-pooled
Downloading w2vu-cvsv-precompute-pca512-cls128-mean-pooled.zip to /content
 98% 386M/394M [00:04<00:00, 90.1MB/s]
100% 394M/394M [00:04<00:00, 102MB/s] 
%%capture
!unzip w2vu-cvsv-precompute-pca512-cls128-mean-pooled.zip
!rm *.zip

GAN

import torch
torch.version.cuda
'10.1'
torch.backends.cudnn.version()
7603
%cd /content/fairseq
/content/fairseq
from google.colab import drive
drive.mount('/content/drive')
%%writefile rungan.sh
PREFIX=w2v_unsup_gan_xp
TASK_DATA=/content/precompute_pca512_cls128_mean_pooled
TEXT_DATA=/content/preppedtext/phones/
KENLM_PATH=/content/preppedtext/phones/lm.phones.filtered.04.bin

PREFIX=$PREFIX CUDA_LAUNCH_BLOCKING=1 fairseq-hydra-train \
	-m --config-dir fairseq/config/model/wav2vecu/gan \
	--config-name w2vu \
	task.data=${TASK_DATA} \
	task.text_data=${TEXT_DATA} \
	task.kenlm_path=${KENLM_PATH} \
	checkpoint.no_epoch_checkpoints=true \
	checkpoint.save_dir=/content/drive/MyDrive/w2vu \
	'common.seed=range(0,5)'
Writing rungan.sh
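The `-m` flag turns this into a Hydra multirun: the quoted `'common.seed=range(0,5)'` sweep is expanded into five separate training runs with seeds 0 through 4, each with its own override set. A minimal sketch of what that expansion does conceptually (this is illustrative only, not Hydra's actual sweep parser):

```python
import re

def expand_seed_override(override: str):
    """Expand a Hydra-style 'key=range(a,b)' sweep into per-run overrides.

    Illustrates what the -m (multirun) flag does with a range() value;
    not Hydra's real implementation.
    """
    key, _, value = override.partition("=")
    m = re.fullmatch(r"range\((\d+),\s*(\d+)\)", value)
    if not m:
        return [override]  # plain override: a single run
    lo, hi = int(m.group(1)), int(m.group(2))
    return [f"{key}={seed}" for seed in range(lo, hi)]

print(expand_seed_override("common.seed=range(0,5)"))
# -> ['common.seed=0', 'common.seed=1', 'common.seed=2', 'common.seed=3', 'common.seed=4']
```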
!bash rungan.sh
[2021-06-04 00:06:14,189][fairseq.tasks.unpaired_audio_text][INFO] - REF: ɛ n f œ ʂ ə n a d ɵ ʂ ə k t f øː r d eː t s ɔ m h ɛ n d ə p oː ɕ œ r k ɔ n s ɛ t ə n
[2021-06-04 00:06:14,192][fairseq.tasks.unpaired_audio_text][INFO] - HYP: oː b iː ʃ œ m ɕ m œ ɕ ɪ ɵ ɕ ɵ m ɵ s ɵ uː ɵ s ɵ ɛ ʂ a tː sx
[2021-06-04 00:06:14,198][fairseq.tasks.unpaired_audio_text][INFO] - LM [REF]: -53.44462585449219, 0.05339602260269112
[2021-06-04 00:06:14,198][fairseq.tasks.unpaired_audio_text][INFO] - LM [HYP]: -61.104984283447266, 0.006571721232914821
[2021-06-04 00:06:14,844][valid][INFO] - {"epoch": 8, "valid_loss": "0.93", "valid_ntokens": "3039.79", "valid_nsentences": "144.214", "valid_lm_score_sum": "-71760.8", "valid_num_pred_chars": "28972", "valid_vocab_seen_pct": "0.949477", "valid_uer": "92.9812", "valid_weighted_lm_ppl": "229.386", "valid_lm_ppl": "206.793", "valid_wps": "15426", "valid_wpb": "3039.8", "valid_bsz": "144.2", "valid_num_updates": "128", "valid_best_weighted_lm_ppl": "189.002"}
[2021-06-04 00:06:14,846][fairseq.checkpoint_utils][INFO] - Preparing to save checkpoint for epoch 8 @ 128 updates
[2021-06-04 00:06:14,847][fairseq.trainer][INFO] - Saving checkpoint to /content/drive/MyDrive/w2vu/checkpoint8.pt
[2021-06-04 00:06:14,911][fairseq.trainer][INFO] - Finished saving checkpoint to /content/drive/MyDrive/w2vu/checkpoint8.pt
[2021-06-04 00:06:14,974][fairseq.checkpoint_utils][INFO] - Saved checkpoint /content/drive/MyDrive/w2vu/checkpoint8.pt (epoch 8 @ 128 updates, score 229.38563413598007) (writing took 0.12713056299980963 seconds)
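The `[valid]` lines carry their metrics as a JSON payload after the bracketed prefix, so numbers like `valid_uer` (phone unit error rate) and `valid_lm_ppl` can be pulled out for quick monitoring. A sketch assuming that log format, using a shortened copy of the validation line above:

```python
import json
import re

# A [valid] line from the training log, with the metrics dict trimmed for brevity.
line = ('[2021-06-04 00:06:14,844][valid][INFO] - {"epoch": 8, "valid_loss": "0.93", '
        '"valid_uer": "92.9812", "valid_lm_ppl": "206.793", "valid_num_updates": "128"}')

def parse_valid_line(line):
    """Extract the JSON metrics dict from a fairseq [valid] log line, or None."""
    m = re.search(r"\[valid\]\[INFO\] - (\{.*\})", line)
    return json.loads(m.group(1)) if m else None

metrics = parse_valid_line(line)
print(metrics["valid_uer"], metrics["valid_lm_ppl"])  # -> 92.9812 206.793
```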