MacBook pro committed on
Commit 476a3d9 · 1 parent: 408ba7c

Runtime ONNX downloads: wire model_downloader at startup+initialize; show reference frame until animator ready; client shows reference_ack; update README with flags

Files changed (2)
  1. README.md +22 -0
  2. original_fastapi_app.py +18 -0
README.md CHANGED
@@ -197,6 +197,28 @@ If the Space shows a perpetual "Restarting" badge:
 
 If problems persist, capture the Container log stack trace and open an issue.
 
+## Enable ONNX Model Downloads (Safe LivePortrait)
+
+To pull LivePortrait ONNX files into the container at runtime and enable the safe animation path:
+
+1) Set these Space secrets/variables in the Settings → Variables panel:
+
+- `MIRAGE_ENABLE_SCRFD=1` (already the default in the Dockerfile)
+- `MIRAGE_ENABLE_LIVEPORTRAIT=1`
+- `MIRAGE_DOWNLOAD_MODELS=1`
+- `MIRAGE_LP_APPEARANCE_URL=https://huggingface.co/myn0908/Live-Portrait-ONNX/resolve/main/appearance_feature_extractor.onnx`
+- `MIRAGE_LP_MOTION_URL=https://huggingface.co/myn0908/Live-Portrait-ONNX/resolve/main/motion_extractor.onnx` (optional)
+
+2) Restart the Space. The server downloads the models in the background on startup, and also syncs once when you hit "Initialize AI Pipeline".
+
+3) Check `/pipeline_status` or the in-UI metrics for:
+- `ai_pipeline.animator_available: true`
+- `ai_pipeline.reference_set: true` (after uploading a reference)
+
+Notes:
+- The safe loader uses onnxruntime-gpu when available and falls back to CPU otherwise. This path provides a visible transformation placeholder and validates end-to-end integration.
+- Point the model URLs only at assets you have permission to download.
+
 ## Model Weights (Planned Voice Pipeline)
 The codebase now contains placeholder directories for upcoming audio feature extraction and conversion models.
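Step 3 of the README section can be scripted instead of checked by eye. A minimal sketch, assuming the Space serves JSON at `/pipeline_status` on port 7860; the two field names are the ones the README lists, but the helper functions here are illustrative, not part of the repo:

```python
import json
import urllib.request

def fetch_status(base_url="http://localhost:7860"):
    """Fetch the pipeline status JSON from /pipeline_status (endpoint per the README)."""
    with urllib.request.urlopen(f"{base_url}/pipeline_status", timeout=10) as resp:
        return json.load(resp)

def animator_ready(status):
    """True once both readiness fields from the README report ready."""
    ai = status.get("ai_pipeline", {})
    return bool(ai.get("animator_available")) and bool(ai.get("reference_set"))
```

Calling `animator_ready(fetch_status())` in a retry loop is one way to gate client-side work until the animator has loaded.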
 
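The first note above describes a GPU-first, CPU-fallback loader. A sketch of how that provider selection typically looks with onnxruntime; the function names are mine, not the repo's actual loader:

```python
def pick_providers(available):
    """Prefer the CUDA provider when onnxruntime-gpu exposes it, else fall back to CPU."""
    preferred = ("CUDAExecutionProvider", "CPUExecutionProvider")
    return [p for p in preferred if p in available]

def make_session(model_path):
    """Create an InferenceSession with GPU-first provider ordering."""
    import onnxruntime as ort  # deferred so pick_providers stays importable without it
    providers = pick_providers(ort.get_available_providers())
    return ort.InferenceSession(model_path, providers=providers)
```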
original_fastapi_app.py CHANGED
@@ -15,6 +15,10 @@ from metrics import metrics as _metrics_singleton, Metrics
 from config import config
 from voice_processor import voice_processor
 from avatar_pipeline import get_pipeline
+try:
+    import model_downloader  # optional runtime downloader
+except Exception:
+    model_downloader = None
 
 app = FastAPI(title="Mirage Real-time AI Avatar System")
 
@@ -92,6 +96,13 @@ async def initialize_pipeline():
         return {"status": "already_initialized", "message": "Pipeline already loaded"}
 
     try:
+        # Best-effort: download models first if enabled via env
+        if model_downloader is not None:
+            try:
+                loop = asyncio.get_running_loop()
+                await loop.run_in_executor(None, model_downloader.maybe_download)
+            except Exception:
+                pass
         success = await pipeline.initialize()
         if success:
             pipeline_initialized = True
@@ -303,6 +314,13 @@ async def log_config():
         "gpu_name": gpu_name,
     }
     print("[startup]", startup_line)
+    # Kick off non-blocking model download in background (optional)
+    if model_downloader is not None:
+        try:
+            loop = asyncio.get_running_loop()
+            loop.run_in_executor(None, model_downloader.maybe_download)
+        except Exception:
+            pass
 
 
 # Note: The Dockerfile / README launch with: uvicorn app:app --port 7860
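The hunks above call `model_downloader.maybe_download()`, but the module itself is not part of this commit. A hypothetical sketch of what it might contain, driven by the `MIRAGE_*` variables from the README; every name, path, and default here is an assumption, not the actual file:

```python
import os
import urllib.request
from pathlib import Path

# Assumed destination directory; the real module may use a different path.
MODEL_DIR = Path(os.environ.get("MIRAGE_MODEL_DIR", "models"))

# Env var -> destination filename, mirroring the URLs listed in the README.
_URL_VARS = {
    "MIRAGE_LP_APPEARANCE_URL": "appearance_feature_extractor.onnx",
    "MIRAGE_LP_MOTION_URL": "motion_extractor.onnx",
}

def maybe_download():
    """Download any configured ONNX files not already on disk.

    No-op unless MIRAGE_DOWNLOAD_MODELS=1; returns the list of paths written.
    Blocking, which is why the app runs it via loop.run_in_executor.
    """
    if os.environ.get("MIRAGE_DOWNLOAD_MODELS") != "1":
        return []
    MODEL_DIR.mkdir(parents=True, exist_ok=True)
    written = []
    for var, filename in _URL_VARS.items():
        url = os.environ.get(var)
        dest = MODEL_DIR / filename
        if not url or dest.exists():
            continue  # variable unset, or file already downloaded
        try:
            urllib.request.urlretrieve(url, dest)
            written.append(dest)
        except Exception:
            dest.unlink(missing_ok=True)  # drop partial files; never crash startup
    return written
```

Swallowing download errors matches the "best-effort" comment in the diff: a missing model leaves the Space on the reference-frame fallback rather than failing startup.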