Emergency assessment and medical assistance with local AI processing and intelligent report generation for first responders
Google - The Gemma 3n Impact Challenge
Hackathon Writeup · Aug 4, 2025



A cutting-edge emergency response and medical assistance application that combines AI-powered real-time video communication with intelligent assessment tools. Built with React Native and Expo, featuring local AI processing with cloud fallback for optimal performance and privacy.
The Citizen2Responder App is designed to assist emergency responders, medical professionals, and individuals during critical situations. It provides AI-guided assessments, automated report generation, and real-time care instructions through an intuitive video calling interface.
📺 Watch Demo Video
See the app in action - AI-powered emergency response tools, real-time assessment, and care instructions

Scan the QR code with Expo Go or use the link below
🚀 Live Demo on Expo
Experience the app instantly using the Expo Go app on your mobile device. Scan the QR code or open the link above to test all features including the AI assessment tools, emergency reporting, and care instructions.
⚠️ Framework Limitation: While Gemma-3n supports multimodal capabilities, llama.rn currently does not support vision and audio processing. This is a limitation of the mobile framework, not the underlying model. Therefore, we route vision and audio tasks to cloud-based models to provide complete functionality.
The app routes each request by capability: text-only tasks run on the local Gemma-3n model via llama.rn, while vision and audio tasks are sent to cloud-based models.
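A minimal TypeScript sketch of this capability-based routing. The names (`Modality`, `AIRequest`, `routeRequest`) are illustrative, not the app's actual API; only the local-text / cloud-multimodal split comes from the writeup.

```typescript
// Illustrative sketch of capability-based routing: llama.rn handles
// text-only inference with the local Gemma-3n GGUF model, while vision
// and audio tasks fall back to cloud providers. Names are assumptions.
type Modality = "text" | "vision" | "audio";

interface AIRequest {
  modality: Modality;
  prompt: string;
}

function routeRequest(req: AIRequest): "local" | "cloud" {
  // llama.rn does not yet support vision/audio, so only pure-text
  // requests stay on-device.
  return req.modality === "text" ? "local" : "cloud";
}
```

This keeps the common case (text assessment) private and offline, and only reaches the network when the model framework cannot handle the modality locally.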
Fine-tuning code and data live in the finetuning/ directory. Our medical training dataset was developed through real EMT field experience and features 40+ emergency scenarios across critical categories.
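To make the dataset's shape concrete, here is a hypothetical illustration of one scenario record. The field names and sample text are assumptions for illustration; the actual schema is defined in the finetuning/ directory.

```typescript
// Hypothetical shape of one fine-tuning example; the real schema in
// finetuning/ may differ.
interface EmergencyScenario {
  category: string;    // e.g. "cardiac", "trauma", "environmental"
  instruction: string; // the emergency prompt given to the model
  response: string;    // the EMT-reviewed target answer
}

const example: EmergencyScenario = {
  category: "trauma",
  instruction:
    "Adult with heavy bleeding from a forearm laceration. What should I do?",
  response:
    "Apply firm, direct pressure with a clean cloth and keep the arm elevated until responders arrive.",
};
```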

The app features three primary access modes accessible through intuitive toggle controls, plus vision capabilities:
Assess Mode - Initial emergency assessment with toggle controls
Report Mode - Emergency report generation with incident details
Care Mode - Pre-care instructions with step-by-step guidance
The app includes AI-powered vision analysis for visual assessment support.
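The three modes map naturally onto a small union type driving per-mode behavior. A sketch under assumptions: the `Mode` type, `systemPrompt` function, and prompt text below are illustrative stubs, not the app's actual tuned prompts.

```typescript
// Illustrative mode selection; the real prompts in the app are tuned
// for EMT workflows and differ from these placeholder strings.
type Mode = "assess" | "report" | "care";

function systemPrompt(mode: Mode): string {
  switch (mode) {
    case "assess":
      return "Guide the user through an initial emergency assessment.";
    case "report":
      return "Generate a structured incident report from the details provided.";
    case "care":
      return "Give step-by-step pre-care instructions until responders arrive.";
  }
}
```

Modeling modes as a closed union lets the compiler flag any toggle state that lacks a matching prompt.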
git clone <repository-url>
cd relay-responder-app
npm install
Configure Local AI Model
⚠️ IMPORTANT: Configure the local model path in your .env file
Model Requirements:
Environment Setup
Create a .env file with required API keys and model configuration:
EXPO_PUBLIC_OPENROUTER_API_KEY=your_openrouter_api_key_here
EXPO_PUBLIC_MODEL_PATH=/path/to/your/models/gemma-3n-E2B-it-Q4_K_M.gguf
EXPO_PUBLIC_DEEPGRAM_PUBLIC_KEY=your_deepgram_api_key_here
EXPO_PUBLIC_REPLICATE_API_KEY=your_replicate_api_key_here
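Missing keys are easiest to catch at startup. A hedged sketch of validating this configuration: the `validateEnv` helper is an assumption for illustration; only the variable names come from the .env example above.

```typescript
// Required environment variables, matching the .env keys shown above.
const REQUIRED_KEYS = [
  "EXPO_PUBLIC_OPENROUTER_API_KEY",
  "EXPO_PUBLIC_MODEL_PATH",
  "EXPO_PUBLIC_DEEPGRAM_PUBLIC_KEY",
  "EXPO_PUBLIC_REPLICATE_API_KEY",
] as const;

// Return the names of any keys that are missing or empty, so the app
// can fail fast with a clear message instead of erroring mid-call.
function validateEnv(env: Record<string, string | undefined>): string[] {
  return REQUIRED_KEYS.filter((k) => !env[k]);
}
```

Usage: call `validateEnv(process.env)` during app bootstrap and surface any missing names before the first AI request is made.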
npm run prebuild
npm run start:dev
# iOS
npm run ios:dev
# Android
npm run android:dev
# iOS
npm run build:ios
# Android
npm run build:android
npm run start # Standard Expo start
npm run start:dev # Development client start
npm run android:dev # Android debug build
npm run ios:dev # iOS debug build
npm run build:android # Android production build
npm run build:ios # iOS production build
npm run prebuild # Clean prebuild
npm run lint # Code linting
⚠️ Emergency Disclaimer: This application is designed to assist in emergency situations but should not replace professional medical advice or emergency services. Always contact appropriate emergency services (911, etc.) for immediate medical emergencies.
This Writeup has been released under the Attribution 4.0 International (CC BY 4.0) license.
Earl Potters. Citizen2Responder: AI-Powered Emergency Response App. Kaggle, 2025. https://www.kaggle.com/competitions/google-gemma-3n-hackathon/writeups/citizen2responder-ai-powered-emergency-response-ap