
LGC Concept AI 🎓🤖

Learning Concepts. Writing Exams. Building Confidence.

Home interface showing mode-based learning architecture and navigation.

LGC Concept AI is an Anna University–oriented AI learning assistant built to help engineering students understand theory deeply, structure answers correctly, and write confident 13-mark responses in the expected university exam format.

The system prioritizes concept clarity, exam relevance, and mental confidence, making it especially effective for slow learners, first-generation engineers, and theory-heavy subjects.


🚀 Project Vision

Many engineering students struggle not because concepts are impossible, but because explanations are:

  • Too abstract
  • Not exam-oriented
  • Poorly structured for Anna University evaluation

LGC Concept AI closes this gap by behaving like a patient senior or tutor who explains concepts:

  • Clearly
  • Step-by-step
  • In a marks-aware, exam-ready structure

The goal is not just answer generation, but learning that survives exam pressure.


Major Version 2 — Mode-Based Learning Architecture

Version 2 established learning as situational and intentional, not generic.

Learning is separated by intent:

  • Depth
  • Speed
  • Precision
  • Articulation

✨ Learning Modes

🔹 Learn Mode (Primary)

  • Fully exam-oriented
  • Strict Anna University scope enforcement
  • Structured answers suitable for 13-mark questions
  • Aspect-aware answering:
    • Definition
    • Construction
    • Working
    • Comparison
    • Applications
    • Advantages / Limitations

Core Points / Mental Model Extraction

After a full explanation, students can extract 5–7 memory-friendly core points to reinforce understanding and revision.


🔹 Fast Learn Mode

  • Designed for quick clarity and last-minute revision
  • Provides key takeaways only
  • No long explanations
  • No exam structuring
  • No analogies

⚠️ Not suitable for coding or deep derivations
👉 Users are guided to Learn Mode when depth is required


🔹 Clear Doubt Mode

  • Designed for micro-clarifications
  • Answers only the specific doubt
  • Short, direct, and focused
  • Avoids re-teaching the entire topic

Perfect when:

  • You mostly understand the concept
  • You’re stuck at one small point
  • You need clarity without overload

🔹 Verify Understanding (Teach-Back Mode — Core Philosophy)

One of the strongest pillars of LGC Concept AI is learning by explaining.

In Teach-Back Mode:

  1. The student explains a concept in their own words
  2. The AI:
    • Encourages first
    • Checks conceptual correctness
    • Identifies missing logic
    • Points out mistakes briefly
    • Motivates the student to retry

“If you can explain it clearly, you understand it.”

This mode verifies real understanding, not memorization.


🧠 Answer Structure (Anna University Preferred)

In Learn Mode, responses follow a marks-aware structure:

  • Definition (≈2 marks)
  • Construction / Components (≈3 marks)
  • Working Principle (≈4–5 marks)
  • Applications
  • Advantages & Limitations
  • One clearly marked analogy (not for exam writing)

Answers strictly match what is asked —
nothing extra, nothing missing.


🔁 Learning Experience Design

  • Scroll-based long answers
  • Previous responses remain visible
  • Continuous learning flow (not form-based)
  • Reduced cognitive load and exam anxiety
  • Structured transition into learning modes

⚙️ Infrastructure & Reliability (v2.2+)

Version 2.2 introduced a major backend stability upgrade.

LGC Concept AI no longer depends on a single AI provider.

🔁 Unified Multi-Model Fallback Stack

Requests cascade through the chain in order:

  1. Llama 3.3 70B (primary)
  2. Nemotron 30B (secondary)
  3. Gemma 27B (tertiary)
  4. Gemini 2.5 Flash (stable fallback)

If one provider is rate-limited or overloaded, the system automatically:

  • Retries on 429
  • Applies exponential backoff
  • Falls back to the next model
  • Returns a graceful 503 only if all fail

This ensures controlled degradation instead of system collapse.
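The retry-and-fallback behavior described above can be sketched as a simple loop. This is an illustrative sketch, not the actual implementation: the model identifiers, the `callModel()` interface, and the retry count are assumptions for the example.

```javascript
// Illustrative multi-model fallback chain (names follow the README's order).
const MODEL_CHAIN = [
  "llama-3.3-70b",
  "nemotron-30b",
  "gemma-27b",
  "gemini-2.5-flash",
];

// Try each model in order; retry the same model on HTTP 429 with
// exponential backoff; return a graceful 503 only if every model fails.
async function askWithFallback(prompt, callModel, maxRetries = 3) {
  for (const model of MODEL_CHAIN) {
    for (let attempt = 0; attempt < maxRetries; attempt++) {
      const res = await callModel(model, prompt);
      if (res.status === 200) return res;   // success
      if (res.status !== 429) break;        // hard failure: move to next model
      // exponential backoff before retrying the rate-limited model
      await new Promise((r) => setTimeout(r, 2 ** attempt * 100));
    }
  }
  return { status: 503, error: "All providers unavailable" };
}
```

The key design point is that a 429 (rate limit) is treated as transient and retried, while any other error skips straight to the next provider, so one overloaded model degrades latency rather than availability.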


⏱ Timeout & Failure Handling

  • Per-request timeout using AbortController
  • Structured logging with request-level tracing
  • Provider abstraction (OpenRouter + Gemini)
  • Clear failure reporting

The goal is reliability, not dependency fragility.
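A per-request timeout with AbortController typically looks like the sketch below. The wrapper shape and the 15-second default are assumptions for illustration; any signal-aware operation (such as `fetch`) can be passed in.

```javascript
// Run a signal-aware async operation with a hard per-request timeout.
// If the operation does not settle in time, its AbortSignal fires.
async function withTimeout(run, timeoutMs = 15000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await run(controller.signal);
  } finally {
    clearTimeout(timer); // always clear to avoid a dangling timer
  }
}

// Usage sketch: withTimeout((signal) => fetch(providerUrl, { signal }), 15000)
```

Cancelling via the signal (rather than racing a bare promise) matters because it actually tears down the underlying HTTP request instead of leaving it running in the background.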


🚀 Global Launch Transition

Entering Learn Mode triggers a controlled launch transition:

  • App-level overlay
  • Animated loading feedback
  • Structured navigation flow

This prevents abrupt context switching and improves UX coherence.


🏗️ Tech Stack

Frontend

  • React + Vite
  • Clean, distraction-free UI
  • Mobile-friendly layout
  • Mode-isolated rendering logic

Backend

  • Node.js + Express
  • Mode-based routing
  • Prompt isolation per learning mode
  • Multi-provider AI abstraction layer
  • Structured logging & retry control
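Mode-based routing with prompt isolation can be sketched as a dispatch table keyed by mode. The mode names follow the README; the system-prompt text and the `buildRequest` helper are illustrative, not the repository's actual prompts.

```javascript
// One isolated system prompt per learning mode — no shared prompt state,
// so one mode's instructions can never bleed into another's.
const MODE_PROMPTS = {
  learn: "Answer in full Anna University 13-mark structure, exam scope only.",
  fastLearn: "Give key takeaways only. No analogies, no long expansions.",
  clearDoubt: "Answer only the specific doubt. Do not re-teach the topic.",
  teachBack: "Encourage first, then check the student's explanation for gaps.",
};

function buildRequest(mode, userMessage) {
  const system = MODE_PROMPTS[mode];
  if (!system) throw new Error(`Unknown mode: ${mode}`);
  return {
    messages: [
      { role: "system", content: system },
      { role: "user", content: userMessage },
    ],
  };
}
```

Keeping each mode's constraints in its own entry makes the isolation auditable: adding a mode is a new key, and an unrecognized mode fails loudly instead of silently falling back to generic behavior.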

🤖 AI Strategy (Mode-Isolated Behavior)

LGC Concept AI prevents mode bleeding by isolating prompts and intent.

Each mode is constrained deliberately.

Learn Mode

  • Full exam-oriented explanations
  • Aspect-aware answering
  • Strict Anna University scope control
  • Core point extraction

Fast Learn Mode

  • Key takeaways only
  • No deep structuring
  • No analogies
  • No long expansions

Clear Doubt Mode

  • Micro clarification only
  • No re-teaching

Teach-Back Mode

  • Encourages first
  • Evaluates logic
  • Identifies missing reasoning
  • Motivates retry

🔐 Privacy & Cost Philosophy

  • No forced subscriptions
  • No hidden monetization
  • Minimal data storage
  • Lightweight and sustainable architecture
  • Free-model prioritized with intelligent fallback

Learning needs investment in time and consistency, not money.


🎯 Target Audience

  • Anna University engineering students
  • Slow learners struggling with theory
  • Students who understand concepts but panic in exams
  • Learners who want clarity over shortcuts

📖 Future Enhancements (Planned)

  • Mathematical rendering engine (v2.3)
  • Structured markdown + equation formatting
  • Reflection prompts (“What did I correct?”)
  • Non-gamified learning streaks
  • Subject-wise structuring
  • Offline revision mode
  • Conversational chat-like UI

🙏 Acknowledgements

  • Anna University exam pattern & evaluation style
  • Open learning communities
  • OpenRouter API
  • Google Gemini API (fallback inference provider)
  • Open models ecosystem
  • Brevo API (transactional email delivery)

LGC Concept AI is not a shortcut tool.

It is a structured learning companion designed to build
clarity, discipline, and confidence.


📊 Development Activity

This project has been built through consistent iteration, debugging, and system-level refinement over multiple months.

Commit Activity Timeline

Shows consistent weekly contributions and active development phases.

Code Frequency Analysis

Highlights real engineering work — additions, refactoring, and system restructuring over time.


📘 Development Documentation

Detailed development history, including:

  • system evolution (v1 → v2.3)
  • UI redesign and mode isolation
  • debugging and rendering issues
  • backend restructuring and fallback architecture

is available in:

👉 /docs


Ownership

This software system is an original product developed under LGC Systems.

The ideation, system architecture, design, and overall product vision originate from
Ramalingam Jayavelu, Founder of LGC Systems.

All intellectual property, including system architecture, design, implementation, and source code contained in this repository, belongs exclusively to Ramalingam Jayavelu under LGC Systems.

This repository does not transfer ownership or rights to any external organization, institution, company, or third party.

The software is maintained and governed solely under the LGC Systems initiative.
