iscaas/AFOSR-HAR-2021-2025
A Multimodal Attention-Based Deep Learning Framework For Real-Time Activity Recognition At The Edge
Code Analysis
This repository is a fragmented collection of unrelated deep learning research projects (video action recognition, federated learning, object detection) rather than a unified implementation of the multi-modal human activity recognition framework described in the root README.
Strengths
Contains standard, well-structured PyTorch implementations for specific tasks like 3D CNN knowledge distillation and video classification using established libraries (apex, mmdet).
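For context, knowledge distillation trains a compact student network to match a larger teacher's temperature-softened output distribution alongside the hard labels. A minimal NumPy sketch of the standard Hinton-style loss follows; the function name, temperature, and weighting values are illustrative assumptions, not code taken from this repository:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over a 1-D logit vector."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    """Hypothetical sketch of a knowledge-distillation loss.

    Blends KL divergence between the softened teacher and student
    distributions with ordinary cross-entropy on the hard label.
    """
    p_t = softmax(teacher_logits, T)  # softened teacher targets
    p_s = softmax(student_logits, T)  # softened student predictions
    kl = np.sum(p_t * (np.log(p_t) - np.log(p_s)))  # KL(teacher || student)
    ce = -np.log(softmax(student_logits)[label])    # hard-label cross-entropy
    # T^2 rescales the soft-target gradients to balance the hard-label term
    return alpha * (T ** 2) * kl + (1 - alpha) * ce
```

When the student reproduces the teacher's logits exactly, the KL term vanishes and only the weighted cross-entropy remains; a mismatched student incurs a strictly larger loss.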
Weaknesses
The project structure is a 'dump' of several unrelated repositories with no cohesive architecture. The README claims a specific multi-modal fusion system that is not implemented anywhere in the codebase, creating a severe mismatch between the documentation and the actual code.
Score Breakdown
Signal breakdown: Innovation · Craft · Traction · Scope
Evidence
Commits
26
Contributors
1
Files
3800
Active weeks
8
Repository
Language: Python
Stars: 3
Forks: 1
License: —