Projects
Explore our portfolio of projects that apply modern web development technologies to solve real-world problems. Each project demonstrates practical full-stack development, from system architecture to user experience, and showcases the skills needed to build production-ready applications.
Ajrasakha Chatbot
Summary: Multilingual agricultural chatbot that connects farmers with verified knowledge through a smart answer retrieval system
Project Overview
Ajrasakha Chatbot is a farmer-friendly chat interface where agricultural workers can ask questions in their own language and receive reliable answers instantly. The system follows a smart three-tier approach to find the best answer: it first searches the Golden Dataset (a collection of expert-verified question-answer pairs), then checks the Package of Practices (PoP) database for standard agricultural guidelines, and finally uses advanced AI language models if neither source has the answer. This ensures farmers always get helpful responses, whether from verified expert knowledge or AI-generated guidance.
Key Features
Farmers can ask questions in their native language, making the platform accessible to agricultural workers across different regions. The chat interface is simple and intuitive, designed for users who may not be tech-savvy. Questions are answered through a smart prioritization system: verified answers from the Golden Dataset appear first, followed by standard practices from the Package of Practices, and finally AI-generated responses when needed.
The platform supports multiple regional languages through the Sarvam AI API, ensuring farmers can communicate naturally in their preferred language. Voice input is available through speech-to-text capabilities, allowing farmers to ask questions by speaking rather than typing. All conversations are saved, so farmers can revisit previous answers anytime.
Real-time chat delivery ensures farmers receive answers instantly, whether from the knowledge base or AI models. The system is built on LibreChat technology, providing a reliable and modern chat experience similar to popular AI assistants but tailored specifically for agricultural needs.
Technologies Used
The chat interface is built with React and TypeScript for a smooth, responsive user experience. The backend uses Node.js and Express.js to handle chat messages and route queries efficiently. LibreChat provides the foundation for the chat interface, offering a modern and reliable messaging platform.
For AI capabilities, the system integrates DeepSeek-R1, Qwen3, and GPT-OSS language models through Ollama. MongoDB Atlas stores the Golden Dataset and Package of Practices, using vector search to find semantically similar questions. The Sarvam AI API handles regional language translation, enabling farmers to communicate in their native languages.
Authentication is handled through Firebase, ensuring secure access for farmers. The system uses Model Context Protocol (MCP) servers to access structured agricultural data efficiently.
How It Works
When a farmer asks a question, the system first searches the Golden Dataset for verified answers from agricultural experts. If a matching answer exists, it’s delivered instantly. If none is found, the system checks the Package of Practices database for standard agricultural guidelines and best practices relevant to the question.
When neither the Golden Dataset nor the Package of Practices contains the answer, the system uses AI language models to generate a helpful response based on agricultural knowledge. This AI-generated answer is sent to the farmer’s chat so they receive immediate guidance. Simultaneously, the question and AI-generated answer are forwarded to the Ajrasakha Reviewer System for expert validation.
This three-tier approach ensures farmers always receive timely answers while maintaining quality through expert review. Over time, as more questions are reviewed and approved, the Golden Dataset grows, meaning more farmers receive instantly verified answers instead of AI-generated ones.
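A minimal TypeScript sketch of this fallback chain, assuming hypothetical searchGoldenDataset, searchPoP, and generateWithLLM helpers (the names are illustrative, not the actual codebase’s API):

```typescript
// Hypothetical three-tier resolution; the helper names below are illustrative stubs.
declare function searchGoldenDataset(q: string): Promise<string | null>;
declare function searchPoP(q: string): Promise<string | null>;
declare function generateWithLLM(q: string): Promise<string>;

interface Answer {
  text: string;
  source: "golden-dataset" | "pop" | "llm";
  needsExpertReview: boolean;
}

async function answerQuestion(question: string): Promise<Answer> {
  // Tier 1: expert-verified question-answer pairs.
  const golden = await searchGoldenDataset(question);
  if (golden) return { text: golden, source: "golden-dataset", needsExpertReview: false };

  // Tier 2: standard guidelines from the Package of Practices.
  const pop = await searchPoP(question);
  if (pop) return { text: pop, source: "pop", needsExpertReview: false };

  // Tier 3: AI-generated answer, flagged for forwarding to the Reviewer System.
  const generated = await generateWithLLM(question);
  return { text: generated, source: "llm", needsExpertReview: true };
}
```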
Benefits for Farmers
Farmers get instant answers to their agricultural questions without language barriers, as the system supports multiple regional languages. They can ask questions via text or voice, making it accessible even to those who struggle with typing. The chat interface is simple and familiar, similar to popular messaging apps.
By prioritizing verified expert knowledge from the Golden Dataset, farmers receive trusted information first. When AI generates answers, those responses still undergo expert review through the Ajrasakha Reviewer System, ensuring quality improves continuously. All conversations are saved, allowing farmers to reference previous answers whenever needed.
The platform is available anytime, providing 24/7 access to agricultural guidance regardless of expert availability. This helps farmers make timely decisions, especially during critical farming periods when immediate answers are essential.
GitHub Repository
Ajrasakha Reviewer System
Summary: AI-powered agricultural advisory platform with expert knowledge base and multi-level review system
Project Overview
The Ajrasakha Reviewer System is the quality control backbone that ensures farmers receive accurate, expert-verified information. When the chatbot uses AI language models to answer a farmer’s question (because the answer wasn’t found in the Golden Dataset or Package of Practices), that question and AI-generated answer are automatically sent to this reviewer system. Here, real agricultural experts review the question, evaluate the AI’s response, and provide their expert opinion through multiple review phases. Once the review process is complete and experts approve the answer, it’s added to the Golden Dataset, making it available as a verified answer for future farmers who ask similar questions.
Key Features
The reviewer system receives questions that were answered by AI models in the chatbot. These questions go through multiple review phases where different agricultural experts independently evaluate both the question and the AI-generated answer. Each expert can approve, modify, or reject the answer, adding their professional insights and corrections.
Experts are tracked through a reputation system that rewards quality contributions. The system monitors each expert’s approval rates, response times, and answer quality. Experts earn incentive points for thorough reviews, while penalties apply for rushed or low-quality evaluations. This gamification encourages experts to provide careful, thoughtful reviews.
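One plausible shape for this incentive arithmetic is sketched below; the point values, bonuses, and thresholds are assumptions for illustration, not the platform’s actual formula:

```typescript
// Illustrative reputation update; weights and thresholds are assumptions.
interface ExpertStats {
  reputation: number;
  reviewsCompleted: number;
  approvalRate: number;      // fraction of this expert's reviews upheld by moderators
  avgResponseHours: number;  // average time taken to complete a review
}

function applyReviewOutcome(stats: ExpertStats, upheld: boolean, hoursTaken: number): ExpertStats {
  const basePoints = upheld ? 10 : -5;            // reward upheld reviews, penalize rejected ones
  const speedBonus = hoursTaken < 24 ? 2 : 0;     // small bonus for timely reviews
  const rushPenalty = hoursTaken < 0.1 ? -3 : 0;  // discourage rubber-stamping in minutes

  const completed = stats.reviewsCompleted + 1;
  return {
    reputation: stats.reputation + basePoints + speedBonus + rushPenalty,
    reviewsCompleted: completed,
    approvalRate: (stats.approvalRate * stats.reviewsCompleted + (upheld ? 1 : 0)) / completed,
    avgResponseHours: (stats.avgResponseHours * stats.reviewsCompleted + hoursTaken) / completed,
  };
}
```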
After the final review phase, approved question-answer pairs are automatically added to the Golden Dataset. The analytics dashboard shows how the Golden Dataset grows over time, tracks expert contributions, monitors question review status, and identifies common agricultural topics that farmers ask about. This helps administrators understand farmer needs and optimize the knowledge base.
Technologies Used
The backend uses TypeScript, Express.js, Node.js, MongoDB Atlas with Vector Search, InversifyJS for dependency injection, Firebase Authentication, and Sentry for error monitoring. The frontend is built with React, TypeScript, TanStack Router, TanStack Query, Shadcn UI, and Tailwind CSS.
For AI/ML, the platform uses DeepSeek-R1 (70B), Qwen3 (1.7B), and GPT-OSS (20B) LLMs via Ollama, HuggingFace BAAI/bge-large-en-v1.5 for embeddings, MongoDB Atlas as the vector database, and MCP servers for structured data access. Additional services include MCP tools for Golden Dataset and FAQ access, audio processing for voice queries, and the Sarvam AI API for regional language support.
System Architecture
When a farmer’s question is answered by the AI language model in the chatbot (meaning it wasn’t found in the Golden Dataset or the Package of Practices), the system automatically creates a review task. This task contains the original question, the AI-generated answer, and relevant context like the farmer’s state and crop information.
The review task moves through multiple phases, with different agricultural experts assigned to each phase. Experts can view the question, see what the AI answered, and provide their assessment. They can approve the answer as-is, suggest modifications, or provide a completely new expert answer. After all review phases are complete, moderators perform a final quality check.
Once approved, the question-answer pair is added to the Golden Dataset with metadata including the expert who reviewed it, verification status, and agricultural domain (crop type, state, season, etc.). Future farmers asking similar questions will now receive this expert-verified answer instantly from the chatbot.
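A sketch of how such a review task and its phase progression might be modeled; the field names and phase labels below are assumptions, not the system’s actual schema:

```typescript
// Hypothetical review-task shape and phase progression; names are illustrative.
type ReviewPhase = "expert-review-1" | "expert-review-2" | "moderation" | "approved" | "rejected";

interface ReviewTask {
  question: string;
  aiAnswer: string;
  context: { state: string; crop: string };  // farmer context forwarded with the task
  phase: ReviewPhase;
  expertAnswers: { expertId: string; verdict: "approve" | "modify" | "reject"; answer?: string }[];
}

// Advance a task to its next phase once the current phase's reviews are in.
function advance(task: ReviewTask, verdict: "approve" | "reject"): ReviewTask {
  const next: Record<ReviewPhase, ReviewPhase> = {
    "expert-review-1": "expert-review-2",
    "expert-review-2": "moderation",
    "moderation": "approved",
    "approved": "approved",
    "rejected": "rejected",
  };
  return { ...task, phase: verdict === "reject" ? "rejected" : next[task.phase] };
}
```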
The platform serves three user types: Agricultural Experts (review AI-generated answers, earn reputation points), Moderators (perform final approval, ensure quality standards), and Admins (manage the review workflow, monitor expert performance, oversee system operations).
Data Structure
The Golden Dataset stores question-answer pairs with embeddings, metadata (state, crop, season, domain, specialist, sources), similarity scores, and verification status. Expert performance tracking includes reputation scores, incentive tracking, penalty tracking, and response quality metrics.
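A sketch of a similarity lookup against this collection using MongoDB Atlas Vector Search from the Node.js driver; the $vectorSearch stage is standard Atlas syntax, but the database, collection, index, and field names here are assumptions:

```typescript
import { MongoClient } from "mongodb";

// Database, collection, index, and field names below are assumptions.
async function findSimilarQuestions(queryVector: number[], topK = 5) {
  const client = new MongoClient(process.env.MONGODB_URI!);
  await client.connect();
  try {
    const coll = client.db("ajrasakha").collection("golden_dataset");
    return await coll
      .aggregate([
        {
          // $vectorSearch must be the first pipeline stage in Atlas.
          $vectorSearch: {
            index: "golden_embedding_index",
            path: "embedding", // 1024-dim bge-large-en-v1.5 vector
            queryVector,
            numCandidates: 100,
            limit: topK,
          },
        },
        {
          $project: {
            question: 1,
            answer: 1,
            "metadata.state": 1,
            "metadata.crop": 1,
            verified: 1,
            score: { $meta: "vectorSearchScore" },
          },
        },
      ])
      .toArray();
  } finally {
    await client.close();
  }
}
```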
Project Goals
The Reviewer System ensures that AI-generated answers are validated by real agricultural experts before becoming part of the permanent knowledge base. This creates a growing library of expert-verified information that makes the chatbot smarter over time. As more questions are reviewed and approved, more farmers receive instant verified answers instead of AI-generated ones.
The multi-phase review process ensures quality through multiple expert opinions, reducing the chance of incorrect information reaching farmers. By tracking expert performance and providing incentives, the system motivates agricultural specialists to contribute their knowledge. The reputation system recognizes top contributors and maintains high review standards.
Ultimately, the Reviewer System builds a comprehensive, state and crop-specific agricultural knowledge base that reflects real expert wisdom. It helps identify which topics farmers ask about most frequently, guiding future content creation and expert recruitment. This continuous improvement cycle ensures the platform becomes more valuable to farmers every day.
GitHub Repository
ViBe
Summary: AI-proctored video-based Learning Management System
Project Overview
ViBe is a comprehensive Learning Management System revolutionizing education through video-based and multimedia content delivery. The platform integrates AI-powered proctoring to ensure academic integrity while providing an engaging, interactive learning environment. ViBe combines online learning flexibility with security and monitoring features necessary for assessments and examinations.
Key Features
ViBe delivers rich multimedia content through interactive video modules forming the core learning experience. AI proctoring uses facial recognition and behavior analysis to automatically monitor exams, ensuring integrity without human proctors. The course management system handles everything from course creation and student enrollment to detailed progress tracking.
Assessment tools cover the full spectrum—quizzes, assignments, and examinations—with automated grading providing immediate feedback. Progress analytics give students and instructors detailed visibility into performance and engagement patterns. During examinations, AI-monitored testing environments actively prevent cheating. Students access everything through a personalized dashboard visualizing their progress. Instructors have tools for content creation, student management, and grading integrated into a single streamlined interface.
Technologies Used
Frontend: React, TypeScript. Backend: Express.js, Node.js. Database: MongoDB. AI Proctoring: Computer vision and deep learning models detecting unusual eye movements, multiple faces, and prohibited materials.
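As a rough illustration of the client side of such proctoring, the sketch below captures webcam frames in the browser and posts them to a hypothetical /api/proctoring/frame endpoint for model inference; the endpoint, capture interval, and resolution are all assumptions:

```typescript
// Browser-side sketch: periodically capture webcam frames for server-side analysis.
async function startProctoringCapture(intervalMs = 5000) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.createElement("video");
  video.srcObject = stream;
  await video.play();

  const canvas = document.createElement("canvas");
  canvas.width = 640;
  canvas.height = 480;
  const ctx = canvas.getContext("2d")!;

  setInterval(() => {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    canvas.toBlob((blob) => {
      if (!blob) return;
      // Server-side models check for multiple faces, gaze, and prohibited materials.
      fetch("/api/proctoring/frame", { method: "POST", body: blob });
    }, "image/jpeg", 0.7);
  }, intervalMs);
}
```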
Project Goals
ViBe builds a full-featured LMS with a video-first approach, recognizing video as the most engaging way to deliver educational material. Implementing reliable AI proctoring addresses a major online education challenge—maintaining exam integrity without human proctors. The platform creates intuitive interfaces ensuring technology enhances rather than hinders learning. Scalable video content delivery handles growing numbers of students and courses without performance degradation. Comprehensive analytics provide actionable insights into learning progress. Maintaining academic integrity through automated monitoring ensures certifications and grades carry real weight and credibility.
GitHub Repository
Upcoming Features
Filter Users by Role
View Feature Request #605 →
Profile Picture Edit Option
View Feature Request #596 →
Forgot Password / Remember Password
View Feature Request #594 →
Password Visibility Toggle
View Feature Request #591 →
Return to Video from Quiz
View Feature Request #561 →
Engagement Games
View Feature Request #545 →
Detect and Prevent Microphone Manipulation
View Feature Request #608 →
Detect Background Blur / Virtual Background Usage
View Feature Request #607 →
Spandan
Summary: Live class interaction platform with AI-powered question generation
Project Overview
Spandan transforms live teaching sessions into engaging learning experiences. The system captures teacher speech in real-time, converts it into transcripts, automatically generates relevant questions from the content, and presents them to students through a dedicated portal for instant interaction and assessment.
Key Features
Spandan captures teacher speech in real-time and converts it into accurate transcripts. AI automatically generates relevant questions from these transcripts, creating instant engagement opportunities. Students access questions through a dedicated portal to submit answers during or after class. The platform enables true live interaction, making classes dynamic and participatory. Everything is tracked—questions generated and student responses—giving teachers a complete view through their dashboard to monitor engagement and identify areas needing clarification.
Technologies Used
Frontend: React, TypeScript. Backend: Express.js, Node.js. Database: MongoDB. Speech Recognition: OpenAI Whisper (ASR). Question Generation: open-source LLMs.
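A sketch of the transcript-to-questions pipeline: Whisper is invoked through its CLI, and the question-generation step assumes a local Ollama server (the project only specifies "open-source LLMs", so Ollama and the model name are assumptions):

```typescript
import { execFileSync } from "node:child_process";
import { readFileSync } from "node:fs";

// Transcribe a lecture recording with the openai-whisper CLI.
function transcribe(audioPath: string): string {
  // Whisper writes <basename>.txt into the output directory.
  execFileSync("whisper", [audioPath, "--model", "base", "--output_format", "txt", "--output_dir", "/tmp"]);
  const base = audioPath.replace(/^.*\//, "").replace(/\.[^.]+$/, "");
  return readFileSync(`/tmp/${base}.txt`, "utf8");
}

// Generate quiz questions from a transcript chunk via a local LLM (Ollama assumed).
async function generateQuestions(transcript: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3", // assumed model name
      prompt: `Generate three short quiz questions from this lecture excerpt:\n\n${transcript}`,
      stream: false,
    }),
  });
  const data = await res.json();
  return data.response;
}
```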
Project Goals
Spandan enables seamless real-time classroom interaction, transforming passive lectures into active learning. Automating question generation removes the burden from teachers to constantly create engagement prompts while teaching. The platform provides instant engagement and feedback, allowing teachers to gauge understanding immediately rather than waiting for formal assessments. Tracking participation throughout sessions helps identify engaged students and those needing support. Creating accessible transcripts serves multiple purposes—students can review material, those who missed class can catch up, and teachers can reflect on their teaching.
GitHub Repository
Upcoming Features
Teacher Dashboard for Student Performance Monitoring
View Feature Request #22 →
Student Dashboard (Performance & Achievement Overview)
View Feature Request #21 →
Point Allocation & Time-Aware Scoring
View Feature Request #20 →
Achievement System (Badges & Rewards)
View Feature Request #19 →
Manual Question Generation (Host-Controlled Override)
View Feature Request #18 →
Automatic Question Generation (Real-Time, Interval-Based)
View Feature Request #17 →
Cohost Feature (Poll Room Collaboration)
View Feature Request #16 →
DDD
Summary: Dopamine Driven Dashboard - Integrated performance and engagement analytics across all projects
Project Overview
DDD is a comprehensive performance and engagement dashboard that provides real-time insights into user activity, progress, and achievements across the entire platform. Rather than being standalone, DDD integrates into all other projects (ViBe, Spandan, Peer Evaluation), providing unified analytics that motivates users through gamification and visual feedback.
Key Features
DDD delivers real-time performance tracking with interactive charts and data visualizations. Users earn achievements, build streaks, and progress through levels as they engage with platform features. Progress indicators and milestones show users how far they’ve come and what they’re working toward. The system provides personalized insights tailored to each user’s activity patterns, along with motivational feedback and rewards that sustain engagement.
Technologies Used
Built with React and TypeScript on the frontend, Express.js and Node.js on the backend, and MongoDB for data storage. Chart.js and D3.js power the interactive visualizations.
Integration Approach
DDD embeds as a modular component within each project. In ViBe, it tracks video engagement, completion rates, and learning patterns. In Spandan, it monitors class participation, question responses, and interaction frequency. For Peer Evaluation, it analyzes evaluation activity, feedback quality, and review completeness.
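A minimal sketch of what embedding DDD as a modular component could look like inside a host project; the component name, props, and metrics endpoint are illustrative rather than the actual DDD API:

```tsx
import React, { useEffect, useState } from "react";

// Hypothetical embeddable DDD widget; props and endpoint are assumptions.
interface DddWidgetProps {
  project: "vibe" | "spandan" | "peer-evaluation";
  userId: string;
}

export function DddWidget({ project, userId }: DddWidgetProps) {
  const [metrics, setMetrics] = useState<Record<string, number> | null>(null);

  useEffect(() => {
    // Each host project reports its own events; the widget only reads aggregates.
    fetch(`/api/ddd/metrics?project=${project}&user=${userId}`)
      .then((r) => r.json())
      .then(setMetrics);
  }, [project, userId]);

  if (!metrics) return <p>Loading dashboard…</p>;
  return (
    <ul>
      {Object.entries(metrics).map(([name, value]) => (
        <li key={name}>{name}: {value}</li>
      ))}
    </ul>
  );
}
```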
Project Goals
DDD creates a unified dashboard bringing together all user activities across platforms. Through engaging gamification mechanics, it keeps users motivated while providing actionable performance insights. The system enables cross-project analytics, allowing users and administrators to see patterns spanning multiple applications.
GitHub Repositories
DDD, ViBe (DDD), Spandan (DDD), Peer Evaluation (DDD)
Upcoming Features
Teacher Dashboard for Student Performance Monitoring (Spandan)
View Feature Request #22 →
Student Dashboard - Performance & Achievement Overview (Spandan)
View Feature Request #21 →
Module-wise Dashboard (ViBe)
View Feature Request #538 →
Quiz-wise Dashboard (ViBe)
View Feature Request #537 →
Section-wise Dashboard (ViBe)
View Feature Request #529 →
Peer Evaluation
Summary: Multi-level peer assessment system with automated anomaly detection and escalation workflow
Project Overview
Peer Evaluation is an intelligent assessment system facilitating peer-to-peer evaluation with built-in quality control. The system assigns unique IDs to each student per quiz, ensuring anonymity and security. QR codes are generated only when teachers use the bulk-download workflow for later PDF uploads; when students upload directly through the portal, unique IDs are generated automatically without QR codes. Algorithm-based anomaly detection flags issues to Teaching Assistants, who can escalate to teachers, maintaining academic integrity throughout.
Key Features
The platform automatically generates unique IDs for anonymous evaluation. Teachers choose between bulk download with QR codes for offline distribution or direct student portal uploads with auto-generated IDs. Students evaluate peers anonymously while algorithm-based detection identifies suspicious patterns and statistical outliers. The multi-level escalation workflow moves from peer reviews to automated detection to TA review to teacher intervention.
TAs review flagged submissions through their dashboard, while teachers handle final escalation for complex cases. Rubric-based evaluation ensures consistency. The platform provides comprehensive analytics on evaluation patterns, manages structured feedback with quality controls, and uses statistical analysis for fair rating aggregation. Complete audit trails track the entire process.
Technologies Used
Frontend: React, JavaScript. Backend: Express.js, Node.js. Database: MongoDB. QR code libraries handle unique code generation, and statistical outlier detection algorithms (e.g., standard deviation analysis) power anomaly detection. Authentication: JWT/OAuth.
Evaluation Workflow
Bulk Download Path
Teacher creates assessment with rubrics, downloads unique IDs with QR codes, distributes printed assessments, then scans and uploads PDFs. System links PDFs to student IDs via QR codes. Students evaluate assigned peer submissions anonymously.
Student Portal Upload Path
Teacher creates assessment with rubrics. Students upload directly from portal. System auto-generates unique IDs without QR codes. Students evaluate assigned peer submissions anonymously.
Common Workflow
System runs algorithm-based analysis for anomalies. Flagged cases go to TAs for investigation. Complex cases escalate to teachers. Validated evaluations aggregate using statistical methods for fair final marks.
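A compact sketch of the kind of standard-deviation check described above; the 2-sigma threshold is an assumption for illustration, not the platform’s actual setting:

```typescript
// Flag ratings more than `threshold` standard deviations from the group mean.
function flagOutlierRatings(ratings: number[], threshold = 2): number[] {
  const mean = ratings.reduce((a, b) => a + b, 0) / ratings.length;
  const variance = ratings.reduce((a, b) => a + (b - mean) ** 2, 0) / ratings.length;
  const std = Math.sqrt(variance);
  if (std === 0) return []; // all evaluators agree; nothing to flag

  return ratings
    .map((r, i) => ({ i, z: Math.abs(r - mean) / std }))
    .filter(({ z }) => z > threshold)
    .map(({ i }) => i);
}

// Example: the last evaluator's score is far from the consensus and gets flagged for TA review.
flagOutlierRatings([7, 8, 7, 7.5, 7, 8, 7.5, 7, 8, 2]); // -> [9]
```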
Project Goals
The project implements a fair, transparent multi-level evaluation system ensuring accurate assessment. Anonymity is achieved through unique ID-based identification. The platform supports flexible submission methods (bulk upload or student portal) and automates anomaly detection using statistical algorithms. The efficient escalation workflow ensures issues are handled at appropriate levels. Comprehensive analytics help instructors understand evaluation quality, while multiple checkpoints maintain academic integrity. The system provides actionable insights for continuous improvement.
GitHub Repository
Vi-SlideS
Summary: AI-powered adaptive classroom platform that tailors teaching based on student questions and cognitive analysis
Project Overview
Vi-SlideS revolutionizes teaching by making it question-driven and adaptive. After a brief 5-10 minute topic introduction, students submit questions that shape class direction. AI analyzes collective questions, providing teachers with real-time insights into class mood, motivation, and conceptual understanding. AI automatically addresses straightforward questions, allowing teachers to focus on complex queries requiring deeper explanation and personalized attention.
Key Features
Students submit questions after the topic introduction through a real-time interface. They can choose anonymous or identified submissions and track question status to see which questions were answered by AI and which were addressed by the teacher. AI detects overall class mood by examining sentiment and tone, assesses motivation levels, classifies questions by cognitive level, extracts main themes and concerns, and identifies learning gaps.
The system performs smart triage to determine auto-answerable questions versus those needing teacher attention. Straightforward questions get instant AI responses with relevant sources. Complex questions route to the teacher’s prioritized dashboard. Teachers see real-time question overviews, class insights (mood, motivation, knowledge gaps), AI-prioritized questions, and suggested teaching direction. They can review and override AI answers as needed.
Technologies Used
Current Testing Phase
Google Forms for question collection, Google Sheets for data aggregation and analysis, Google Slides for presentation and visualization.
Planned Stack
Frontend: React, TypeScript. Backend: Express.js, Node.js. Database: MongoDB. AI/NLP: LLMs for question analysis and response generation. Sentiment analysis: NLP models for mood and motivation detection. Classification models for cognitive complexity assessment. Real-time communication: WebSockets.
System Workflow
Pre-class: The teacher presents a 5-10 minute overview and students submit questions.
AI Analysis: The system collects questions, classifies them by complexity, analyzes sentiment for mood and motivation, and creates a gist of the class’s understanding and concerns.
Response: Straightforward questions get instant AI responses; complex questions route to the prioritized teacher dashboard; the teacher uses these insights to guide discussion dynamically.
Post-class: The system generates a comprehensive analytics report, and questions and answers are archived for future reference.
Question Classification
AI auto-answers factual or definition-based queries, questions similar to previously answered ones, and low-complexity clarifications. Teachers address questions with high conceptual depth, novel perspectives, complex problem-solving scenarios, personalized explanation needs, and critical-thinking queries. A code sketch of this triage follows.
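The routing rule below mirrors the classification just described, but the cognitive-level labels and field names are assumptions for illustration:

```typescript
// Illustrative triage: route a classified question to AI or the teacher dashboard.
type CognitiveLevel = "recall" | "comprehension" | "application" | "analysis" | "synthesis";

interface ClassifiedQuestion {
  text: string;
  level: CognitiveLevel;
  seenBefore: boolean;          // a similar question was already answered
  needsPersonalContext: boolean;
}

function route(q: ClassifiedQuestion): "ai-auto-answer" | "teacher-dashboard" {
  const lowComplexity = q.level === "recall" || q.level === "comprehension";
  if (q.seenBefore || (lowComplexity && !q.needsPersonalContext)) return "ai-auto-answer";
  return "teacher-dashboard"; // high conceptual depth, novel angles, or personal needs
}
```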
Project Goals
Vi-SlideS transforms traditional lectures into question-driven, adaptive learning experiences where student curiosity shapes class direction. The platform provides real-time insights into understanding and engagement, reducing teacher burden through automated responses and enabling focus on complex discussions. The responsive teaching approach improves engagement as students see their questions influencing class content. Identifying learning gaps early helps address problems before they become ingrained. The platform creates data-driven teaching strategies based on question analysis, fostering more interactive and student-centered classrooms.
GitHub Repository
Upcoming Features
Teacher and Student Authentication
View Feature Request #1 →
Teacher Session Creation with Unique Code
View Feature Request #2 →
Student Join Session Using Session Code
View Feature Request #3 →
Student Question Submission Interface
View Feature Request #4 →
Slides-Based Teacher View for Submitted Questions
View Feature Request #5 →
Session Status Control (Start, Pause, and End Session)
View Feature Request #6 →
Minimal Session Summary with Class Mood Gist for Teachers
View Feature Request #7 →
AI-Based Question Analysis and Auto-Responses
View Feature Request #8 →
Vi-Notes
Summary: Authenticity verification platform that ensures genuine human writing through keyboard activity monitoring and statistical signature analysis
Project Overview
Vi-Notes verifies human-written content authenticity by monitoring writing in real-time. Users write freely while the platform captures keyboard activity patterns, typing rhythm, editing behaviors, and statistical signatures that distinguish genuine human composition from AI-generated or AI-assisted text.
Key Features
The platform provides a distraction-free writing interface with silent background monitoring capturing typing speed, pause patterns, deletions, and composition rhythm. The keyboard monitoring tracks micro-patterns like hesitations before sentences, corrections during idea refinement, and variable typing speeds that reflect natural thinking processes.
Statistical analysis examines linguistic patterns, vocabulary diversity, sentence structure variations, and stylistic consistency. AI-generated text exhibits regularities, such as excessive consistency in sentence length or uniform vocabulary distribution, that do not match the recorded keyboard activity. After each session, Vi-Notes generates authenticity reports with confidence scores, suspicious patterns, and supporting evidence. Users share these reports to prove authorship in academic, professional, or publishing contexts. Real-time feedback flags unusual patterns like pasted text chunks or non-human typing behaviors.
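A browser-side sketch of such silent capture using standard DOM events (keydown and paste are real APIs; the session-record shape and the editor element are assumptions):

```typescript
// Record inter-key timing and detect pasted chunks in a writing editor.
interface KeystrokeEvent {
  key: string;
  timestamp: number;  // ms since page load
  interKeyMs: number; // gap since the previous keystroke
}

const session: KeystrokeEvent[] = [];
let lastTs = performance.now();

const editor = document.getElementById("editor") as HTMLTextAreaElement;

editor.addEventListener("keydown", (e) => {
  const now = performance.now();
  session.push({ key: e.key, timestamp: now, interKeyMs: now - lastTs });
  lastTs = now;
});

// Pasted chunks arrive with no corresponding keystroke trail, so log them explicitly.
editor.addEventListener("paste", (e) => {
  const pasted = e.clipboardData?.getData("text") ?? "";
  console.warn(`Paste of ${pasted.length} characters detected`);
});
```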
Technologies Used
Frontend: React, TypeScript, Electron for native desktop apps accessing keyboard events. Backend: Node.js, Express.js. OS-level APIs capture keystroke timing and patterns. ML models (TensorFlow, PyTorch) use supervised learning on human vs. AI text and unsupervised anomaly detection. NLP models analyze statistical signatures. MongoDB stores sessions, keystroke data, and reports with privacy-preserving encryption.
Detection Methods
The system identifies behavioral patterns that are difficult for AI to replicate: natural sentence pauses, real-time corrections, variable typing speed based on cognitive load, and micro-pauses at punctuation. These signatures are compared against known human patterns.
Text analysis reveals origin clues through variation in sentence length and structure, idiosyncratic vocabulary choices, and correlation between writing complexity and revision frequency. Human writers revise complex sections more than simple ones; AI-pasted text shows no such correlation.
Cross-referencing behavioral data with textual patterns provides the strongest verification. Paragraphs appearing without keystroke data, suspiciously constant typing speeds, or mismatched statistical signatures flag potential AI involvement.
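One simple way to express the "suspiciously constant typing speed" check is the coefficient of variation of inter-key gaps, which is well above zero for human typing; the 0.3 cutoff below is an assumption for illustration:

```typescript
// Flag near-metronomic typing: low variability in inter-key gaps is a red flag.
function looksRoboticallyUniform(interKeyMs: number[], minCv = 0.3): boolean {
  const mean = interKeyMs.reduce((a, b) => a + b, 0) / interKeyMs.length;
  const variance = interKeyMs.reduce((a, b) => a + (b - mean) ** 2, 0) / interKeyMs.length;
  const cv = Math.sqrt(variance) / mean; // coefficient of variation
  return cv < minCv;
}
```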
Project Goals
Vi-Notes restores trust in written content through reliable authorship verification, helping educators ensure academic integrity and enabling writers to prove authenticity. The platform develops accurate detection algorithms distinguishing human writing from various AI assistance levels while maintaining strict privacy protection. The system adapts continuously as AI writing tools evolve, staying ahead of new techniques attempting to mimic human writing patterns.
GitHub Repository
Upcoming Features
Basic Writing Editor
View Feature Request #1 →
User Login and Registration
View Feature Request #2 →
Capture Keystroke Timing
View Feature Request #3 →
Detect Pasted Text
View Feature Request #4 →
Save Writing Session Data
View Feature Request #5 →