Microsoft's open-source SpeechT5 speech model mainly provides the following capabilities:
- Speech-to-text: automatic speech recognition (ASR).
- Text-to-speech: speech synthesis (TTS).
- Speech-to-speech: voice conversion between speakers, or speech enhancement.
The SpeechT5 network consists of an Encoder, a Decoder, a PreNet, and a PostNet; different PreNets and PostNets are plugged in depending on the task.
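The idea above can be sketched as plain composition: one shared encoder-decoder backbone, with a task-specific pre-net in front and post-net behind. This is a conceptual sketch only, not the real implementation; all class names here are hypothetical illustrations.

```python
class SharedBackbone:
    """Stands in for the shared Transformer encoder-decoder."""
    def run(self, hidden):
        return f"decoded({hidden})"

class TextPreNet:
    """Text pre-net: embeds input text for the encoder."""
    def __call__(self, text):
        return f"text_embed({text})"

class SpeechPostNet:
    """Speech post-net: turns decoder output into Mel frames."""
    def __call__(self, hidden):
        return f"mel_frames({hidden})"

def tts_pipeline(text):
    # TTS = text pre-net -> shared backbone -> speech post-net.
    # ASR would swap in a speech pre-net and a text post-net instead,
    # while reusing the same backbone.
    backbone = SharedBackbone()
    hidden = TextPreNet()(text)
    return SpeechPostNet()(backbone.run(hidden))

print(tts_pipeline("hello"))
```

Swapping only the pre-/post-nets while sharing the backbone is what lets one pretrained model serve ASR, TTS, and speech-to-speech tasks.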
TTS
Implementing TTS with SpeechT5:
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan
from datasets import load_dataset
import torch
import soundfile as sf

# Load the SpeechT5 TTS processor, model, and HiFi-GAN vocoder
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("microsoft/speecht5_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="hello, where are you from?", return_tensors="pt")

# Load an x-vector containing a speaker's voice characteristics from a dataset
embeddings_dataset = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embeddings = torch.tensor(embeddings_dataset[1234]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)

from IPython.display import Audio
Audio("./speech.wav")
ASR
Implementing ASR with SpeechT5:
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForSpeechToText

# Load the SpeechT5 ASR processor and model
processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_asr")
model = SpeechT5ForSpeechToText.from_pretrained("microsoft/speecht5_asr")

def transcribe_audio(file_path):
    # Load the audio file
    speech, sampling_rate = sf.read(file_path)
    # Ensure the audio is in the right format
    if sampling_rate != 16000:
        raise ValueError("The model expects a 16 kHz audio sampling rate")
    # Preprocess the audio for the model
    inputs = processor(audio=speech, sampling_rate=sampling_rate, return_tensors="pt")
    predicted_ids = model.generate(**inputs, max_length=100)
    # Decode the predicted token ids to text
    transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
    return transcription[0]

# Example usage
file_path = "speech.wav"  # Replace with your file path
transcription = transcribe_audio(file_path)
print("Transcription:", transcription)
Audio Processing
Audio is stored as WAV, which records samples at a given sampling rate and bit depth. Before the audio data is passed to the model, features are extracted from it as a Mel-spectrogram. The following code generates a Mel-spectrogram.
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# Load an example audio file
audio_file_path = 'speech.wav'
y, sr = librosa.load(audio_file_path, sr=16000)

# Compute the Mel-spectrogram
mel_spectrogram = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128, fmax=8000)

# Convert to log scale (dB)
log_mel_spectrogram = librosa.power_to_db(mel_spectrogram, ref=np.max)

# Plot the Mel-spectrogram
plt.figure(figsize=(10, 4))
librosa.display.specshow(log_mel_spectrogram, sr=sr, x_axis='time', y_axis='mel')
plt.colorbar(format='%+2.0f dB')
plt.title('Mel-spectrogram')
plt.tight_layout()
plt.show()
Each time-frequency bin (pixel) of this Mel-spectrogram serves as input data to the model.
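Concretely, the (n_mels × frames) log-Mel matrix is usually transposed so that each time frame, i.e. one column of pixels, becomes a single input feature vector. A minimal NumPy sketch, with a random matrix standing in for the real spectrogram computed above:

```python
import numpy as np

# A stand-in log-Mel spectrogram: 128 mel bins x 50 time frames
# (in practice this is the log_mel_spectrogram from librosa above).
n_mels, n_frames = 128, 50
log_mel = np.random.randn(n_mels, n_frames).astype(np.float32)

# Models typically consume the spectrogram as a sequence of frame vectors:
# transpose to (time, n_mels), so each row is one 128-dim feature vector.
frames = log_mel.T
print(frames.shape)  # (50, 128)
```

The model then processes these 50 frame vectors as a sequence, much like token embeddings in a text Transformer.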
Summary
SpeechT5 is a fairly powerful model that can convert text to speech or speech to text; at present, SpeechT5 only supports English.