FunctionGemma Mobile Actions — LiteRT-LM Export

LiteRT-LM export of the merged FunctionGemma mobile-actions model, in the on-device LLM bundle format for LiteRT (formerly TensorFlow Lite).
Use it for on-device / edge inference with the ai-edge-litert runtime.

What's inside

  • mobile-actions_q8_ekv1024.litertlm — quantized LiteRT-LM model (Q8, kv_cache=1024)
  • base_llm_metadata.textproto — metadata with BOS/EOS token IDs
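The token IDs in base_llm_metadata.textproto can be pulled out with a few lines of standard-library Python. A minimal sketch follows; the field names in the sample (`start_token_id`, `stop_token_id`) are illustrative assumptions, so inspect the actual file for the real keys.

```python
import re

def read_token_ids(textproto: str) -> dict[str, int]:
    """Collect integer `key: value` fields from a textproto string.

    Field names are hypothetical; check base_llm_metadata.textproto
    for the keys the exporter actually writes.
    """
    return {
        key: int(val)
        for key, val in re.findall(r"(\w+)\s*:\s*(\d+)", textproto)
    }

# Example with assumed field names:
sample = "start_token_id: 2\nstop_token_id: 1\n"
print(read_token_ids(sample))  # {'start_token_id': 2, 'stop_token_id': 1}
```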

Base model: dousery/functiongemma-mobile-actions (merged LoRA + base).

Intended use

  • Function calling for mobile actions (creating calendar events, drafting emails, adding contacts, opening maps, toggling Wi‑Fi or the flashlight, etc.)
  • On-device / edge scenarios where a LiteRT-LM (.litertlm) is needed.
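A function-calling model's output still has to be parsed before an action can be dispatched. The sketch below assumes the model emits a JSON object with "name" and "arguments" fields; the actual FunctionGemma output format may differ, so adapt the extraction to what the model really produces.

```python
import json

def parse_function_call(model_output: str):
    """Extract (name, arguments) from model output.

    Assumes the model emits a JSON object such as
    {"name": "set_flashlight", "arguments": {"on": true}};
    the real FunctionGemma format may differ.
    """
    start = model_output.find("{")
    end = model_output.rfind("}")
    if start == -1 or end == -1:
        return None  # no JSON object found in the output
    call = json.loads(model_output[start:end + 1])
    return call["name"], call.get("arguments", {})

out = 'Sure. {"name": "set_flashlight", "arguments": {"on": true}}'
print(parse_function_call(out))  # ('set_flashlight', {'on': True})
```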

How to use (Python, ai-edge-litert)

Install the runtime, then load the model:

```bash
pip install ai-edge-litert-nightly ai-edge-torch-nightly
```

```python
from pathlib import Path
from ai_edge_litert.interpreter import Interpreter

model_path = Path("mobile-actions_q8_ekv1024.litertlm")

# Load the LiteRT-LM model
interp = Interpreter(model_path=str(model_path))
interp.allocate_tensors()
```

Conversion details

  • Source: dousery/functiongemma-mobile-actions (merged)
  • Export: converter.convert_to_litert via ai_edge_torch.generative.utilities
  • Quantization: quantize="dynamic_int8"
  • KV cache: kv_cache_max_len=1024
  • Prefill seq len: 256
  • Export layout: kv_cache.KV_LAYOUT_TRANSPOSED
  • Tokenizer model sourced from unsloth/functiongemma-270m-it (SentencePiece)
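The conversion steps above can be sketched roughly as follows. The import path and keyword names are taken from the bullet list, but `convert_to_litert`'s exact signature varies across ai-edge-torch nightly releases, so treat this as an assumption-laden template rather than a verified recipe.

```python
from pathlib import Path

# Export settings recorded in the conversion details above.
EXPORT_CONFIG = {
    "quantize": "dynamic_int8",
    "kv_cache_max_len": 1024,
    "prefill_seq_len": 256,
}

def export_litertlm(pytorch_model, out_dir: Path) -> None:
    # Requires ai-edge-torch-nightly. The call below mirrors the
    # parameters listed above; argument names are assumptions and
    # may differ between nightly releases.
    from ai_edge_torch.generative.utilities import converter

    converter.convert_to_litert(
        pytorch_model,
        output_path=str(out_dir),
        output_name_prefix="mobile-actions",
        prefill_seq_len=EXPORT_CONFIG["prefill_seq_len"],
        kv_cache_max_len=EXPORT_CONFIG["kv_cache_max_len"],
        quantize=EXPORT_CONFIG["quantize"],
    )
```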

Citation

@misc{functiongemma-mobile-actions-litertlm,
  title={FunctionGemma Mobile Actions — LiteRT-LM Export},
  author={dousery},
  year={2025},
  howpublished={\url{https://huggingface.co/dousery/functiongemma-mobile-actions-litertlm}}
}