IdeaCred

christian-oleary/AutoML-Python-Benchmark

Score: 67

Benchmarks of AutoML Frameworks

Code Analysis

0 files read · 4 rounds

A benchmark project that claims to evaluate AutoML libraries and static code analysis tools but contains no actual implementation logic, only configuration files and placeholder paths.

Strengths

Well-organized directory structure with clear separation between archived forecasting code, active AutoML implementations, and utility scripts. Dependency management via pyproject.toml and data versioning via DVC.

Weaknesses

No actual source code implementation was found, only configuration files and paths marked '[Path outside repo]'. The repository lacks meaningful tests, error handling, and core algorithmic logic; the README's claims about ML pipelines and static analysis are unsupported by the code.

Score Breakdown

Innovation: 3 (25%)
Craft: 68 (35%)
Traction: 11 (15%)
Scope: 67 (25%)
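The percentages above suggest the category scores are combined as a weighted sum. The exact aggregation formula is not documented on this card, so the following is a minimal sketch under that assumption; note that the weighted sum of the displayed category scores comes to 42.95, so the displayed overall score likely involves further normalization or additional inputs.

```python
# Category scores and weights as shown in the score breakdown.
# Assumption: the overall score is a weighted sum of category scores;
# the actual IdeaCred aggregation formula is not documented here.
scores = {"Innovation": 3, "Craft": 68, "Traction": 11, "Scope": 67}
weights = {"Innovation": 0.25, "Craft": 0.35, "Traction": 0.15, "Scope": 0.25}

overall = sum(scores[k] * weights[k] for k in scores)
print(round(overall, 2))  # 42.95
```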

Signal breakdown

Innovation

Not Fork: +1
Code Novelty: +0
Concept Novelty: +1

Craft

CI: +5
Tests: +8
Polish: +1
Releases: +3
Has License: +5
Code Quality: +9
Readme Quality: +15
Recent Activity: +7
Structure Quality: +5
Commit Consistency: +5
Has Dependency Mgmt: +5

Traction

Forks: +0
Stars: +6
HN Points: +0
Watchers: +3
Early Traction: +0
Dev.to Reactions: +0
Community Contribs: +2

Scope

Commits: +7
Languages: +8
Subsystems: +10
Bloat Penalty: +0
Completeness: +7
Contributors: +6
Authored Files: +15
Readme Code Match: +3
Architecture Depth: +7
Implementation Depth: +8
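Summing each signal list shows how the category scores relate to their signals: Craft (68) and Traction (11) equal their sums exactly, while Innovation (signals sum to 2, displayed as 3) and Scope (signals sum to 71, displayed as 67) do not, which suggests some rounding, capping, or normalization not shown on this card. A quick check of the displayed values:

```python
# Signal points copied from the signal breakdown above.
signals = {
    "Innovation": [1, 0, 1],
    "Craft": [5, 8, 1, 3, 5, 9, 15, 7, 5, 5, 5],
    "Traction": [0, 6, 0, 3, 0, 0, 2],
    "Scope": [7, 8, 10, 0, 7, 6, 15, 3, 7, 8],
}
for name, points in signals.items():
    print(name, sum(points))
# Innovation 2, Craft 68, Traction 11, Scope 71
```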

Evidence

Commits: 25
Contributors: 2
Files: 110
Active weeks: 13

Tests · CI/CD · README · License · Contributing

Repository

Language: Python
Stars: 3
Forks: 0
License: MIT