
A sample SwiftUI app using whisper.cpp to do voice-to-text transcriptions. See also: whisper.objc.

Usage:

  1. Select a model from the whisper.cpp repository [1].
  2. Add the model to whisper.swiftui.demo/Resources/models via Xcode.
  3. Select a sample audio file (for example, jfk.wav).
  4. Add the sample audio file to whisper.swiftui.demo/Resources/samples via Xcode.
  5. Select the "Release" [2] build configuration under "Run", then deploy and run it on your device.
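
Steps 1-4 can be sketched from the command line. This is a minimal, hypothetical sketch assuming a whisper.cpp checkout with its usual layout; the model name (`base.en`) and sample (`jfk.wav`) are just examples. Note that copying files into these folders is not sufficient on its own — they still need to be added to the app target via Xcode so they ship in the bundle.

```shell
# Create the resource folders the demo expects (names from this README).
DEMO=whisper.swiftui.demo
mkdir -p "$DEMO/Resources/models" "$DEMO/Resources/samples"

# From the whisper.cpp repository root, a model can be fetched with the
# repo's helper script and staged alongside a sample audio file, e.g.:
#   bash models/download-ggml-model.sh base.en
#   cp models/ggml-base.en.bin "$DEMO/Resources/models/"
#   cp samples/jfk.wav         "$DEMO/Resources/samples/"
```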

Note: Pay attention to the folder paths: whisper.swiftui.demo/Resources/models is the directory where model resources belong, while whisper.swiftui.demo/Models contains actual code.

  [1] I recommend the tiny, base, or small models for running on an iOS device.

  [2] The Release build can significantly boost transcription performance. In this project it also adds -O3 -DNDEBUG to Other C Flags; however, setting flags at the app-project level is not ideal in a real-world project, since they apply to all C/C++ files in the target. Consider splitting the xcodeproj into a workspace in your own project.
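
One way to scope such flags more narrowly, instead of editing Other C Flags on the whole app target, is an .xcconfig file attached only to the Release configuration. This is an illustrative sketch, not the project's actual configuration; the file name and the choice of settings are assumptions.

```
// Release.xcconfig (hypothetical) — attach to the Release configuration only,
// so optimization flags do not leak into Debug builds.
OTHER_CFLAGS = $(inherited) -O3 -DNDEBUG
GCC_OPTIMIZATION_LEVEL = 3
```

Using `$(inherited)` preserves any flags already set at the project or target level rather than replacing them.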