Neuphonic has released NeuTTS Air, an open-source text-to-speech (TTS) speech language model designed to run locally in real time on CPUs. The Hugging Face model card lists 748M parameters (Qwen2 architecture), and the model ships in GGUF quantizations (Q4/Q8), enabling inference through llama.cpp/llama-cpp-python without cloud dependencies. It is licensed under Apache-2.0 and includes a runnable demo and examples.
So, what’s new?
NeuTTS Air couples a 0.5B-class Qwen backbone with Neuphonic's NeuCodec audio codec. Neuphonic positions the system as a "super-realistic, on-device" TTS LM that clones a voice from ~3 seconds of reference audio and synthesizes speech in that style, targeting voice agents and privacy-sensitive applications. The model card and repository explicitly emphasize real-time CPU generation and small-footprint deployment.
Key Features
- Realism at sub-1B scale: Human-like prosody and timbre preservation from a ~0.7B-parameter (Qwen2-class) text-to-speech LM.
- On-device deployment: Distributed in GGUF (Q4/Q8) with CPU-first paths; suitable for laptops, phones, and Raspberry Pi-class boards.
- Instant speaker cloning: Style transfer from ~3 seconds of reference audio (reference WAV + transcript).
- Compact LM+codec stack: Qwen 0.5B backbone paired with NeuCodec (0.8 kbps / 24 kHz) to balance latency, footprint, and output quality.
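To see why the 0.8 kbps codec matters for on-device use, here is a quick back-of-the-envelope comparison against raw 24 kHz 16-bit PCM (the figures come from the bullets above; the helper names are ours):

```python
def codec_bytes(seconds: float, bitrate_bps: int = 800) -> int:
    """Bytes to represent `seconds` of speech at NeuCodec's target 0.8 kbps."""
    return int(seconds * bitrate_bps / 8)

def pcm_bytes(seconds: float, sample_rate: int = 24_000, bytes_per_sample: int = 2) -> int:
    """Bytes for the same duration as uncompressed 16-bit PCM at 24 kHz."""
    return int(seconds * sample_rate * bytes_per_sample)

# One second of speech: 100 bytes of codec tokens vs. 48,000 bytes of PCM,
# i.e. roughly a 480x smaller representation for the LM to generate.
print(codec_bytes(1.0), pcm_bytes(1.0))
```

The compact token stream is what makes a sub-1B LM a plausible generator: it only has to emit a few hundred bytes' worth of codec tokens per second of audio.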
Model architecture and runtime path
- Backbone: Qwen 0.5B serves as a lightweight LM to condition speech generation; the hosted artifact is reported as 748M parameters under the qwen2 architecture on Hugging Face.
- Codec: NeuCodec provides low-bitrate acoustic tokenization/decoding; it targets 0.8 kbps at 24 kHz output, enabling compact representations for efficient on-device use.
- Quantization & format: Prebuilt GGUF backbones (Q4/Q8) are available; the repo includes instructions for llama-cpp-python and an optional ONNX decoder path.
- Dependencies: Uses espeak for phonemization; examples and a Jupyter notebook are provided for end-to-end synthesis.
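The runtime path described above can be sketched as a four-stage pipeline. The wiring below is our illustration only: the repo's actual entry points may differ, so the real implementations (an espeak wrapper, the NeuCodec encoder/decoder, the Qwen LM via llama.cpp) are left as callables the caller supplies:

```python
from typing import Callable, List, Tuple

# A stage is a (label, callable) pair; labels mirror the components above.
Stage = Tuple[str, Callable]

def build_pipeline(phonemize, encode_ref, generate, decode) -> List[Stage]:
    """Wire the four runtime stages in order: phonemize -> encode reference
    -> generate codec tokens -> decode to waveform."""
    return [
        ("phonemize (espeak)", phonemize),
        ("encode reference (NeuCodec)", encode_ref),
        ("generate codec tokens (Qwen LM via llama.cpp)", generate),
        ("decode to 24 kHz waveform (NeuCodec)", decode),
    ]

def run(pipeline: List[Stage], text: str):
    """Thread the input through each stage in sequence."""
    x = text
    for _label, fn in pipeline:
        x = fn(x)
    return x
```

The point of the structure is that every stage is CPU-friendly: phonemization and codec decode are cheap, and the only heavy step is LM token generation, which the GGUF quantizations target.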
On-device performance focus
NeuTTS Air advertises "real-time generation on mid-range devices" and offers CPU-first defaults; the GGUF quantizations are intended for laptops and single-board computers. While no RTF/latency numbers are published on the card, the distribution targets local inference without a GPU and demonstrates a working flow through the provided examples and Space.
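Since the card publishes no real-time-factor (RTF) figures, anyone benchmarking locally can compute the metric themselves. The definition is standard: wall-clock synthesis time divided by the duration of the generated audio, with values below 1.0 meaning faster than real time:

```python
def real_time_factor(synthesis_seconds: float, audio_seconds: float) -> float:
    """RTF = time spent generating / duration of the generated audio."""
    if audio_seconds <= 0:
        raise ValueError("audio duration must be positive")
    return synthesis_seconds / audio_seconds

# e.g. 2.5 s of compute for 5 s of audio -> RTF 0.5 (2x faster than real time)
print(real_time_factor(2.5, 5.0))
```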
Voice cloning workflow
NeuTTS Air requires (1) a reference WAV and (2) the transcript text for that reference. It encodes the reference into style tokens and then synthesizes arbitrary text in the reference speaker's timbre. The Neuphonic team recommends 3–15 s of clean, mono audio and provides pre-encoded samples.
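A small pre-flight check for cloning inputs can encode the recommendations above. The thresholds mirror the 3–15 s clean mono guidance; the function itself is our sketch, not part of the package:

```python
def valid_reference(duration_s: float, channels: int, transcript: str) -> bool:
    """Return True if a cloning reference meets the recommended shape:
    3-15 s of audio, mono, with a non-empty matching transcript."""
    return 3.0 <= duration_s <= 15.0 and channels == 1 and bool(transcript.strip())

print(valid_reference(5.0, 1, "Hello there."))  # within guidance
print(valid_reference(1.2, 2, ""))              # too short, stereo, no transcript
```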
Privacy, responsibility, and watermarking
Neuphonic frames the model for on-device privacy (no audio or text leaves the machine without the user's approval) and notes that all generated audio includes a Perth (Perceptual Threshold) watermarker to support responsible use and provenance.
How it compares
Open, local TTS systems exist (e.g., GGUF-based pipelines), but NeuTTS Air is notable for packaging a small LM + neural codec with instant cloning, CPU-first quantizations, and watermarking under a permissive license. The "world's first super-realistic, on-device speech LM" phrasing is the vendor's claim; the verifiable facts are the model size, formats, cloning procedure, license, and provided runtimes.
The focus is on system trade-offs: a ~0.7B Qwen-class backbone with GGUF quantization, paired with NeuCodec at 0.8 kbps/24 kHz, is a pragmatic recipe for real-time, CPU-only TTS that preserves timbre from ~3–15 s style references while keeping latency and memory predictable. The Apache-2.0 license and built-in watermarking are deployment-friendly, but publishing RTF/latency on commodity CPUs and cloning-quality vs. reference-length curves would enable rigorous benchmarking against existing local pipelines. Operationally, an offline path with minimal dependencies (espeak, llama.cpp/ONNX) lowers privacy/compliance risk for edge agents without sacrificing intelligibility.
Check out the Model Card on Hugging Face and the GitHub Page.