Offline Translation

Description

Offline Translation is an iOS app that translates text from photos without using cloud services. It enables users, especially travelers and people in low-connectivity areas, to take or upload a photo, have the source language detected automatically, and receive a translation in their chosen language.

Use Case

Ideal for travelers who need to translate road signs, menus, or business signs in areas with poor or no internet connectivity.

Market Comparison

Apple and Google both offer offline translation apps. These rely on downloading "language kits" for each supported language pair. Offline Translation evaluates whether translation quality or latency can be improved with alternative on-device models or architectures.

Commercial Feasibility

1/5 – This app is not commercially competitive with Apple/Google offerings and is primarily built for technical exploration.

Technical Goals

Architectures Compared

The app will be developed in three versions with the same UI:

  1. Apple-native pipeline: on-device OCR, language detection, and translation via Apple's Vision, NaturalLanguage, and Translation frameworks (first sketch below).
  2. Hybrid pipeline: Apple frameworks combined with third-party on-device models (e.g., Google's ML Kit) to improve individual steps (second sketch below).
  3. End-to-end vision-language model: a single on-device model that maps directly from image to translated text.
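For reference, here is a minimal sketch of how the first pipeline's OCR and language-detection steps might look, assuming a UIImage input. The function names are illustrative; the final translation step, which would use Apple's Translation framework (iOS 17.4+), is omitted because that API is bound to SwiftUI views.

```swift
import UIKit
import Vision
import NaturalLanguage

enum PipelineError: Error {
    case invalidImage
    case noTextFound
}

/// Runs Apple's on-device OCR (Vision) over a photo and returns the recognized text.
func recognizeText(in image: UIImage) throws -> String {
    guard let cgImage = image.cgImage else { throw PipelineError.invalidImage }

    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate      // favor accuracy over speed
    request.usesLanguageCorrection = true

    // Synchronous for clarity; in the app this would run off the main thread.
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    let lines = (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }
    guard !lines.isEmpty else { throw PipelineError.noTextFound }
    return lines.joined(separator: "\n")
}

/// Detects the dominant language of the recognized text on-device (NaturalLanguage).
func detectLanguage(of text: String) -> NLLanguage? {
    let recognizer = NLLanguageRecognizer()
    recognizer.processString(text)
    return recognizer.dominantLanguage
}
```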
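For the hybrid pipeline, ML Kit's on-device text recognizer could replace or supplement the Vision OCR step. A rough sketch, assuming the GoogleMLKit/TextRecognition pod is installed; the function name is hypothetical.

```swift
import UIKit
import MLKitTextRecognition
import MLKitVision

/// Sketch of swapping ML Kit's on-device OCR into the hybrid pipeline.
func recognizeTextWithMLKit(in image: UIImage,
                            completion: @escaping (Result<String, Error>) -> Void) {
    // Latin-script recognizer; ML Kit ships separate options for other scripts.
    let recognizer = TextRecognizer.textRecognizer(options: TextRecognizerOptions())

    let visionImage = VisionImage(image: image)
    visionImage.orientation = image.imageOrientation

    recognizer.process(visionImage) { result, error in
        if let error = error {
            completion(.failure(error))
        } else {
            completion(.success(result?.text ?? ""))
        }
    }
}
```

Because both recognizers return plain strings, the downstream language-detection and translation steps can stay identical across pipelines 1 and 2, which keeps the comparison fair.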

Design Doc

Design Document for Offline Translation

Evaluation Methodology

Results & Learnings

(In progress)

Next Steps

UI Updates

Model Updates