LectureLens: AI Study Planner with Next.js + Supabase
LectureLens converts raw lecture material (PDF slides, notes, transcripts) into structured topics, concise summaries, flashcards, interactive Q&A, and a milestone-driven study schedule. This post documents the active MVP build phase: the database schema and row-level security policies are codified, a processing-pipeline scaffold exists, and upcoming milestones focus on auth UX, topic navigation, spaced repetition, and semantic search.
Problem
Students collect PDFs, slides, and notes, but translating them into a daily plan is time-consuming. Keeping momentum across readings, practice, and spaced repetition often falls through the cracks.
What LectureLens Does
- Summaries: Distills long-form material into digestible sections
- Flashcards: Generates Q&A cards to drill concepts
- Interactive Q&A: Ask follow-ups on any topic
- Study Plan: Builds a schedule with milestones and reviews
High-Level Architecture
The app uses Next.js (App Router) for routing and UI, with Supabase for authentication, storage, and a Postgres-backed data layer. Content processing is orchestrated server-side for reliability.
- Next.js: UI, routing, server actions
- Supabase: Auth, Postgres, and row-level security rules
- TypeScript: End-to-end types for safer feature work
- Edge Functions: Deterministic processing stages with service role isolation
- pgvector (planned): Semantic retrieval for RAG Q&A
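To make the data layer concrete, here is a minimal sketch of typed rows for the tables the roadmap names (lectures, chunks, topics), plus a pure function mirroring an owner-only RLS policy. Field names and the `status` values are illustrative assumptions, not the actual migration columns.

```typescript
// Hypothetical row types mirroring the lectures/chunks/topics schema.
// Column names here are assumptions for illustration.
interface LectureRow {
  id: string;
  user_id: string; // owner; RLS scopes all reads/writes to this
  title: string;
  status: "uploaded" | "processing" | "ready" | "failed";
}

interface ChunkRow {
  id: string;
  lecture_id: string;
  position: number; // order within the source document
  content: string;
}

// Pure mirror of a typical owner-only Postgres RLS policy, e.g.
//   USING (auth.uid() = user_id)
// Handy for unit-testing access logic without a database round-trip.
function canAccess(row: { user_id: string }, uid: string): boolean {
  return row.user_id === uid;
}
```

Keeping a client-side mirror of the policy predicate lets the UI hide rows a user cannot touch, while the database remains the actual enforcement boundary.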
AI Processing Pipeline
Internally, each uploaded document passes through a sequence of processing stages designed to preserve semantic structure while limiting token usage:
- Text Extraction: Segment PDF text into logical blocks (headings, paragraphs, lists).
- Chunking & Normalization: Group blocks under a token budget with cleanup (hyphen join, heading dedupe).
- Summarization Pass: Summarize each chunk; consolidation merges overlap and enforces terminology.
- Flashcard Generation (planned): Prompt templates classify facts/processes/edge cases, dedupe similar cards.
- Q&A Mode (planned): Embed chunks, retrieve top‑N, inject into answer synthesis prompt.
- Schedule Synthesis (planned): Use summary difficulty + card density to allocate milestone focus windows.
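The chunking and normalization stage above can be sketched as follows. This is a minimal version under two assumed conventions: a rough characters/4 token estimate, and end-of-line hyphen rejoining; the real stage's budget and heuristics may differ.

```typescript
// Rejoin words hyphenated across line breaks ("seman-\ntic" -> "semantic")
// and collapse whitespace.
function normalize(block: string): string {
  return block.replace(/-\n\s*/g, "").replace(/\s+/g, " ").trim();
}

// Crude token estimate (~4 characters per token for English text).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Greedily group normalized blocks so each chunk stays under the budget.
function chunkBlocks(blocks: string[], maxTokens = 512): string[] {
  const chunks: string[] = [];
  let current = "";
  for (const raw of blocks) {
    const block = normalize(raw);
    if (!block) continue;
    const candidate = current ? `${current}\n${block}` : block;
    if (current && estimateTokens(candidate) > maxTokens) {
      chunks.push(current); // budget exceeded: close the current chunk
      current = block;
    } else {
      current = candidate;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Greedy packing keeps chunk boundaries aligned with block boundaries (headings, paragraphs), which is what preserves semantic structure downstream.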
Token & Performance Constraints
- Chunking bounds latency variance and per-document cost.
- Consolidation reduces summary drift and duplication across chunks.
- Deduplication prevents redundant glossary-style flashcards.
Challenges & Mitigations
- PDF Noise: Normalization & rejoining heuristics.
- Duplicate Content: Fuzzy similarity scoring pre‑generation.
- Large Lectures: Incremental hierarchical summarization tree.
- Hallucinations: Narrow retrieval (top‑N chunk context) before answer synthesis.
- Scheduling Bias: Weight milestones by normalized concept density.
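One way to implement the fuzzy similarity scoring mentioned above is Jaccard similarity over word sets. This sketch is a stand-in for the project's actual scoring method, and the 0.8 threshold is a placeholder rather than a tuned value.

```typescript
// Lowercased word set for a piece of card text.
function tokenSet(text: string): Set<string> {
  return new Set(text.toLowerCase().match(/[a-z0-9]+/g) ?? []);
}

// Jaccard similarity: |A ∩ B| / |A ∪ B|, in [0, 1].
function jaccard(a: string, b: string): number {
  const sa = tokenSet(a);
  const sb = tokenSet(b);
  if (sa.size === 0 && sb.size === 0) return 1;
  let overlap = 0;
  for (const t of sa) if (sb.has(t)) overlap++;
  return overlap / (sa.size + sb.size - overlap);
}

// Keep a card only if it is not too similar to any already-kept card.
function dedupeCards(cards: string[], threshold = 0.8): string[] {
  const kept: string[] = [];
  for (const card of cards) {
    if (!kept.some((k) => jaccard(k, card) >= threshold)) kept.push(card);
  }
  return kept;
}
```

Running the filter before generation (on candidate prompts or extracted facts) avoids paying model costs for cards that would be discarded anyway.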
Roadmap Status (WIP)
This section tracks the current build cycle: milestones already implemented versus those next up.
Implemented
- Schema: lectures, chunks, topics
- RLS policies (migrations)
- Edge function scaffold
- Upload flow prototype
Next / In Progress
- Auth UI
- Processing trigger + status badges
- Lecture detail + topic navigation
- Flashcards & study mode
- Practice Q&A + quiz mode
- Semantic search & RAG
- Study schedules & reminders
Milestones
- M1: Auth + processing integration
- M2: Lecture detail + topic UX
- M3: Flashcards & study mode
- M4: Practice Q&A generation
- M5: Semantic search (vector + hybrid)
- M6: Study schedules & tracking
- M7: Deployment & analytics
Post‑MVP: ICS export, adaptive spaced repetition, shared study sets, semantic rerank refinement.
Feedback or ideas? Reach out; I’d love to hear them.
Note: Roadmap sections are aspirational until shipped; page updates track milestone delivery.