Explore our portfolio of innovative projects that apply modern web development technologies to solve real-world problems. Each project demonstrates practical implementations of full-stack development, from system architecture to user experience, showcasing comprehensive skills in building production-ready applications.

Ajrasakha Chatbot

Summary: Multilingual agricultural chatbot that connects farmers with verified knowledge through a smart answer retrieval system

Project Overview

Ajrasakha Chatbot is a farmer-friendly chat interface where agricultural workers can ask questions in their own language and receive reliable answers instantly. The system follows a smart three-tier approach to find the best answer: it first searches the Golden Dataset (a collection of expert-verified question-answer pairs), then checks the Package of Practices (PoP) database for standard agricultural guidelines, and finally uses advanced AI language models if neither source has the answer. This ensures farmers always get helpful responses, whether from verified expert knowledge or AI-generated guidance.

Key Features

Farmers can ask questions in their native language, making the platform accessible to agricultural workers across different regions. The chat interface is simple and intuitive, designed for users who may not be tech-savvy. Questions are answered through a smart prioritization system: verified answers from the Golden Dataset appear first, followed by standard practices from the Package of Practices, and finally AI-generated responses when needed.

The platform supports multiple regional languages through the Sarvam AI API, ensuring farmers can communicate naturally in their preferred language. Voice input is available through speech-to-text capabilities, allowing farmers to ask questions by speaking rather than typing. All conversations are saved, so farmers can revisit previous answers anytime.

Real-time chat delivery ensures farmers receive answers instantly, whether from the knowledge base or AI models. The system is built on LibreChat technology, providing a reliable and modern chat experience similar to popular AI assistants but tailored specifically for agricultural needs.

Technologies Used

The chat interface is built with React and TypeScript for a smooth, responsive user experience. The backend uses Node.js and Express.js to handle chat messages and route queries efficiently. LibreChat provides the foundation for the chat interface, offering a modern and reliable messaging platform.

For AI capabilities, the system integrates DeepSeek-R1, Qwen3, and GPT-OSS language models through Ollama. MongoDB Atlas stores the Golden Dataset and Package of Practices, using vector search to find semantically similar questions. The Sarvam AI API handles regional language translation, enabling farmers to communicate in their native languages.
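The Golden Dataset lookup described above can be sketched as an Atlas Vector Search aggregation pipeline. This is a minimal illustration only: the index name, field names, and candidate counts are assumptions, not the project's actual configuration, though `$vectorSearch` and `$meta: "vectorSearchScore"` are the real Atlas aggregation constructs.

```typescript
// Sketch of a semantic-similarity lookup over the Golden Dataset.
// Index name ("golden_qa_index") and field names are illustrative
// assumptions; the pipeline would be passed to collection.aggregate().
function buildSimilarQuestionPipeline(queryEmbedding: number[], limit = 3) {
  return [
    {
      $vectorSearch: {
        index: "golden_qa_index",   // assumed index name
        path: "questionEmbedding",  // assumed embedding field
        queryVector: queryEmbedding,
        numCandidates: limit * 20,  // oversample, then keep the top `limit`
        limit,
      },
    },
    {
      $project: {
        question: 1,
        answer: 1,
        score: { $meta: "vectorSearchScore" },
      },
    },
  ];
}
```

The pipeline returns the closest stored questions with a similarity score, which the chatbot can threshold before treating a match as a verified answer.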

Authentication is handled through Firebase, ensuring secure access for farmers. The system uses Model Context Protocol (MCP) servers to access structured agricultural data efficiently.

How It Works

When a farmer asks a question, the system first searches the Golden Dataset for verified answers from agricultural experts. If a matching answer exists, it’s delivered instantly. If not found, the system checks the Package of Practices database for standard agricultural guidelines and best practices relevant to the question.

When neither the Golden Dataset nor Package of Practices contain the answer, the system uses AI language models to generate a helpful response based on agricultural knowledge. This AI-generated answer is sent to the farmer’s chat so they receive immediate guidance. Simultaneously, the question and AI-generated answer are forwarded to the Ajrasakha Reviewer System for expert validation.
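The three-tier fallback described above can be sketched as a simple resolution cascade. All names here (`searchGoldenDataset`, `searchPoP`, `generateWithLLM`, `forwardForReview`) are hypothetical stand-ins for the project's actual services:

```typescript
// Sketch of the three-tier answer resolution, assuming hypothetical
// lookup services injected as functions.
type Answer = { text: string; source: "golden" | "pop" | "llm" };
type Lookup = (question: string) => Answer | null;

function resolveAnswer(
  question: string,
  searchGoldenDataset: Lookup,
  searchPoP: Lookup,
  generateWithLLM: (q: string) => Answer,
  forwardForReview: (q: string, a: Answer) => void,
): Answer {
  const golden = searchGoldenDataset(question);
  if (golden) return golden; // tier 1: verified expert answer

  const pop = searchPoP(question);
  if (pop) return pop; // tier 2: Package of Practices guidance

  // Tier 3: fall back to an LLM, and queue the pair for expert review.
  const generated = generateWithLLM(question);
  forwardForReview(question, generated);
  return generated;
}
```

Only the LLM path triggers `forwardForReview`, which is what feeds the Reviewer System described in the next project.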

This three-tier approach ensures farmers always receive timely answers while maintaining quality through expert review. Over time, as more questions are reviewed and approved, the Golden Dataset grows, meaning more farmers receive instantly verified answers instead of AI-generated ones.

Benefits for Farmers

Farmers get instant answers to their agricultural questions without language barriers, as the system supports multiple regional languages. They can ask questions via text or voice, making it accessible even to those who struggle with typing. The chat interface is simple and familiar, similar to popular messaging apps.

By prioritizing verified expert knowledge from the Golden Dataset, farmers receive trusted information first. When AI generates answers, those responses still undergo expert review through the Ajrasakha Reviewer System, ensuring quality improves continuously. All conversations are saved, allowing farmers to reference previous answers whenever needed.

The platform is available anytime, providing 24/7 access to agricultural guidance regardless of expert availability. This helps farmers make timely decisions, especially during critical farming periods when immediate answers are essential.

GitHub Repository

Ajrasakha Chatbot

Ajrasakha Reviewer System

Summary: AI-powered agricultural advisory platform with expert knowledge base and multi-level review system

Project Overview

The Ajrasakha Reviewer System is the quality control backbone that ensures farmers receive accurate, expert-verified information. When the chatbot uses AI language models to answer a farmer’s question (because the answer wasn’t found in the Golden Dataset or Package of Practices), that question and AI-generated answer are automatically sent to this reviewer system. Here, real agricultural experts review the question, evaluate the AI’s response, and provide their expert opinion through multiple review phases. Once the review process is complete and experts approve the answer, it’s added to the Golden Dataset, making it available as a verified answer for future farmers who ask similar questions.

Key Features

The reviewer system receives questions that were answered by AI models in the chatbot. These questions go through multiple review phases where different agricultural experts independently evaluate both the question and the AI-generated answer. Each expert can approve, modify, or reject the answer, adding their professional insights and corrections.

Experts are tracked through a reputation system that rewards quality contributions. The system monitors each expert’s approval rates, response times, and answer quality. Experts earn incentive points for thorough reviews, while penalties apply for rushed or low-quality evaluations. This gamification encourages experts to provide careful, thoughtful reviews.
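The incentive-and-penalty scheme above could be implemented roughly as follows. The specific point values and outcome names are illustrative assumptions, not the system's actual scoring table:

```typescript
// Illustrative reputation update for the incentive/penalty scheme;
// the point values below are made-up assumptions.
interface ExpertStats {
  reputation: number;
  reviewsCompleted: number;
}

type ReviewOutcome = "thorough" | "rushed" | "rejected_by_moderator";

const POINTS: Record<ReviewOutcome, number> = {
  thorough: 10,               // incentive for a careful review
  rushed: -5,                 // penalty for a low-effort review
  rejected_by_moderator: -15, // penalty when moderators overturn the review
};

function applyReviewOutcome(stats: ExpertStats, outcome: ReviewOutcome): ExpertStats {
  return {
    reputation: Math.max(0, stats.reputation + POINTS[outcome]), // floor at zero
    reviewsCompleted: stats.reviewsCompleted + 1,
  };
}
```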

After the final review phase, approved question-answer pairs are automatically added to the Golden Dataset. The analytics dashboard shows how the Golden Dataset grows over time, tracks expert contributions, monitors question review status, and identifies common agricultural topics that farmers ask about. This helps administrators understand farmer needs and optimize the knowledge base.

Technologies Used

The backend uses TypeScript, Express.js, Node.js, MongoDB Atlas with Vector Search, InversifyJS for dependency injection, Firebase Authentication, and Sentry for error monitoring. The frontend is built with React, TypeScript, TanStack Router, TanStack Query, Shadcn UI, and Tailwind CSS.

For AI/ML, the platform uses DeepSeek-R1 (70B), Qwen3 (1.7B), and GPT-OSS (20B) LLMs via Ollama, HuggingFace BAAI/bge-large-en-v1.5 for embeddings, MongoDB Atlas as the vector database, and MCP servers for structured data access. Additional services include MCP tools for Golden Dataset and FAQ access, audio processing for voice queries, and Sarvam AI API for regional languages.

System Architecture

When a farmer’s question is answered by the AI language model in the chatbot (meaning it wasn’t found in Golden Dataset or Package of Practices), the system automatically creates a review task. This task contains the original question, the AI-generated answer, and relevant context like the farmer’s state and crop information.

The review task moves through multiple phases, with different agricultural experts assigned to each phase. Experts can view the question, see what the AI answered, and provide their assessment. They can approve the answer as-is, suggest modifications, or provide a completely new expert answer. After all review phases are complete, moderators perform a final quality check.
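The phase progression above amounts to a small state machine. The phase names are assumptions inferred from the description (the actual number of expert phases may differ):

```typescript
// Minimal sketch of the multi-phase review flow: expert phases,
// then a moderator check, then publication to the Golden Dataset.
// Phase names are assumptions based on the workflow described above.
type ReviewPhase =
  | "expert_phase_1"
  | "expert_phase_2"
  | "moderator_check"
  | "published"
  | "rejected";

function nextPhase(current: ReviewPhase, approved: boolean): ReviewPhase {
  if (!approved) return "rejected";
  switch (current) {
    case "expert_phase_1": return "expert_phase_2";
    case "expert_phase_2": return "moderator_check";
    case "moderator_check": return "published"; // added to the Golden Dataset
    default: return current; // terminal states do not advance
  }
}
```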

Once approved, the question-answer pair is added to the Golden Dataset with metadata including the expert who reviewed it, verification status, and agricultural domain (crop type, state, season, etc.). Future farmers asking similar questions will now receive this expert-verified answer instantly from the chatbot.

The platform serves three user types: Agricultural Experts (review AI-generated answers, earn reputation points), Moderators (perform final approval, ensure quality standards), and Admins (manage the review workflow, monitor expert performance, oversee system operations).

Data Structure

The Golden Dataset stores question-answer pairs with embeddings, metadata (state, crop, season, domain, specialist, sources), similarity scores, and verification status. Expert performance tracking includes reputation scores, incentive tracking, penalty tracking, and response quality metrics.
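The stored shapes described above can be sketched as TypeScript interfaces. Field names are assumptions inferred from the description, not the actual schema:

```typescript
// Illustrative document shapes for the two stores described above;
// field names are inferred, not taken from the real schema.
interface GoldenDatasetEntry {
  question: string;
  answer: string;
  embedding: number[];        // e.g. a bge-large-en-v1.5 vector
  metadata: {
    state: string;
    crop: string;
    season: string;
    domain: string;
    specialist: string;       // reviewing expert
    sources: string[];
  };
  similarityScore?: number;   // populated at query time
  verified: boolean;
}

interface ExpertPerformance {
  expertId: string;
  reputation: number;
  incentivePoints: number;
  penaltyPoints: number;
  avgResponseQuality: number; // assumed 0..1 quality metric
}
```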

Project Goals

The Reviewer System ensures that AI-generated answers are validated by real agricultural experts before becoming part of the permanent knowledge base. This creates a growing library of expert-verified information that makes the chatbot smarter over time. As more questions are reviewed and approved, more farmers receive instant verified answers instead of AI-generated ones.

The multi-phase review process ensures quality through multiple expert opinions, reducing the chance of incorrect information reaching farmers. By tracking expert performance and providing incentives, the system motivates agricultural specialists to contribute their knowledge. The reputation system recognizes top contributors and maintains high review standards.

Ultimately, the Reviewer System builds a comprehensive, state and crop-specific agricultural knowledge base that reflects real expert wisdom. It helps identify which topics farmers ask about most frequently, guiding future content creation and expert recruitment. This continuous improvement cycle ensures the platform becomes more valuable to farmers every day.

GitHub Repository

Ajrasakha Reviewer System

ViBe

Summary: AI-proctored video-based Learning Management System

Project Overview

ViBe is a comprehensive Learning Management System revolutionizing education through video-based and multimedia content delivery. The platform integrates AI-powered proctoring to ensure academic integrity while providing an engaging, interactive learning environment. ViBe combines online learning flexibility with security and monitoring features necessary for assessments and examinations.

Key Features

ViBe delivers rich multimedia content through interactive video modules forming the core learning experience. AI proctoring uses facial recognition and behavior analysis to automatically monitor exams, ensuring integrity without human proctors. The course management system handles everything from course creation and student enrollment to detailed progress tracking.

Assessment tools cover the full spectrum—quizzes, assignments, and examinations—with automated grading providing immediate feedback. Progress analytics give students and instructors detailed visibility into performance and engagement patterns. During examinations, AI-monitored testing environments actively prevent cheating. Students access everything through a personalized dashboard visualizing their progress. Instructors have tools for content creation, student management, and grading integrated into a single streamlined interface.

Technologies Used

Frontend: React, TypeScript. Backend: Express.js, Node.js. Database: MongoDB. AI Proctoring: Computer vision and deep learning models detecting unusual eye movements, multiple faces, and prohibited materials.
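The vision models themselves are outside the scope of a snippet, but the rule layer that turns per-frame detections into proctoring flags can be sketched. The detection fields and thresholds below are illustrative assumptions:

```typescript
// Toy sketch of the rule layer over the proctoring models' per-frame
// output; field names and flag labels are illustrative assumptions.
interface FrameAnalysis {
  faceCount: number;
  gazeOffScreen: boolean;
  prohibitedObjectDetected: boolean;
}

function flagFrame(f: FrameAnalysis): string[] {
  const flags: string[] = [];
  if (f.faceCount === 0) flags.push("no_face");
  if (f.faceCount > 1) flags.push("multiple_faces");
  if (f.gazeOffScreen) flags.push("unusual_eye_movement");
  if (f.prohibitedObjectDetected) flags.push("prohibited_material");
  return flags;
}
```

In practice a system like this would aggregate flags over a time window before alerting, so a single noisy frame does not interrupt an exam.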

Project Goals

ViBe builds a full-featured LMS with a video-first approach, recognizing video as the most engaging way to deliver educational material. Implementing reliable AI proctoring addresses a major online education challenge—maintaining exam integrity without human proctors. The platform creates intuitive interfaces ensuring technology enhances rather than hinders learning. Scalable video content delivery handles growing numbers of students and courses without performance degradation. Comprehensive analytics provide actionable insights into learning progress. Maintaining academic integrity through automated monitoring ensures certifications and grades carry real weight and credibility.

GitHub Repository

ViBe

Upcoming Features

 Filter Users by Role
Filter students and instructors by their roles for easier management and bulk actions. This feature enables teachers to quickly identify and take actions on specific user groups.

View Feature Request #605 →
 Profile Picture Edit Option
Enable users to edit and update their profile pictures directly from the profile page, enhancing personalization and user experience.

View Feature Request #596 →
 Forgot Password / Remember Password
Implement password recovery functionality and "remember me" option to improve authentication convenience and security.

View Feature Request #594 →
 Password Visibility Toggle
Add a toggle button allowing users to show or hide their password while typing, improving usability during login and signup.

View Feature Request #591 →
 Return to Video from Quiz
Allow students to navigate back to the video from quiz screens, providing flexibility to review content before answering questions.

View Feature Request #561 →
 Engagement Games
Interactive games integrated into the learning experience to boost student engagement and motivation through gamified learning activities.

View Feature Request #545 →
 Detect and Prevent Microphone Manipulation
Detect malicious microphone manipulation such as reducing input volume to a minimum or using external microphones placed far away. The system will prompt users with random phrases to repeat, blocking access if they fail to respond and logging incidents for review.

View Feature Request #608 →
 Detect Background Blur / Virtual Background Usage
Identify and block users employing virtual backgrounds or background blur effects during portal access. The system logs all detection instances to enable action on repeated violations, ensuring authentic proctoring environments.

View Feature Request #607 →
Spandan

Summary: Live class interaction platform with AI-powered question generation

Project Overview

Spandan transforms live teaching sessions into engaging learning experiences. The system captures teacher speech in real-time, converts it into transcripts, automatically generates relevant questions from the content, and presents them to students through a dedicated portal for instant interaction and assessment.

Key Features

Spandan captures teacher speech in real-time and converts it into accurate transcripts. AI automatically generates relevant questions from these transcripts, creating instant engagement opportunities. Students access questions through a dedicated portal to submit answers during or after class. The platform enables true live interaction, making classes dynamic and participatory. Everything is tracked—questions generated and student responses—giving teachers a complete view through their dashboard to monitor engagement and identify areas needing clarification.

Technologies Used

Frontend: React, TypeScript. Backend: Express.js, Node.js. Database: MongoDB. Speech Recognition: OpenAI Whisper for automatic speech recognition (ASR). Question Generation: open-source LLMs.

Project Goals

Spandan enables seamless real-time classroom interaction, transforming passive lectures into active learning. Automating question generation removes the burden from teachers to constantly create engagement prompts while teaching. The platform provides instant engagement and feedback, allowing teachers to gauge understanding immediately rather than waiting for formal assessments. Tracking participation throughout sessions helps identify engaged students and those needing support. Creating accessible transcripts serves multiple purposes—students can review material, those who missed class can catch up, and teachers can reflect on their teaching.

GitHub Repository

Spandan

Upcoming Features

 Teacher Dashboard for Student Performance Monitoring
Comprehensive teacher dashboard providing real-time visibility into student engagement, performance trends, point distribution, and achievement progress. Features include session overview, student performance views with sorting/filtering, question analytics, points visibility, achievements monitoring, and post-session summaries. The dashboard updates in real-time during live sessions with read-only access for cohosts.

View Feature Request #22 →
 Student Dashboard (Performance & Achievement Overview)
Centralized student dashboard showing performance summary, achievement showcase, question interaction history, and session analytics. Displays total points earned, session-wise breakdowns, accuracy percentages, badges earned, and upcoming achievements. Updates in near real-time during live sessions with full analytics available after completion.

View Feature Request #21 →
 Point Allocation & Time-Aware Scoring
Advanced scoring system with configurable point allocation per question and time-based score reduction. Rewards faster responses while maintaining fairness across all students.

View Feature Request #20 →
 Achievement System (Badges & Rewards)
Student-side achievement system with badges for milestones like first correct answer, winning streaks, fastest responder, and perfect accuracy. Enhances engagement through gamification and visible progress indicators.

View Feature Request #19 →
 Manual Question Generation (Host-Controlled Override)
Allows teachers to manually create and inject questions during live sessions, overriding automatic generation. Provides flexibility for spontaneous assessment and custom question integration.

View Feature Request #18 →
 Automatic Question Generation (Real-Time, Interval-Based)
Enhanced automatic question generation triggered at configurable time intervals during live lectures. Ensures consistent student engagement without manual intervention from teachers.

View Feature Request #17 →
 Cohost Feature (Poll Room Collaboration)
Multi-host functionality enabling poll room collaboration where cohosts can assist with session management, question monitoring, and student interaction. Supports teaching assistants and co-teachers.

View Feature Request #16 →
DDD

Summary: Dopamine Driven Dashboard - Integrated performance and engagement analytics across all projects

Project Overview

DDD is a comprehensive performance and engagement dashboard that provides real-time insights into user activity, progress, and achievements across the entire platform. Rather than being standalone, DDD integrates into all other projects (ViBe, Spandan, Peer Evaluation), providing unified analytics that motivates users through gamification and visual feedback.

Key Features

DDD delivers real-time performance tracking with interactive charts and data visualizations. Users earn achievements, build streaks, and progress through levels as they engage with platform features. Progress indicators and milestones show users how far they’ve come and what they’re working toward. The system provides personalized insights tailored to each user’s activity patterns, along with motivational feedback and rewards that sustain engagement.

Technologies Used

Built with React and TypeScript on the frontend, Express.js and Node.js on the backend, and MongoDB for data storage. Chart.js and D3.js power the interactive visualizations.

Integration Approach

DDD embeds as a modular component within each project. In ViBe, it tracks video engagement, completion rates, and learning patterns. In Spandan, it monitors class participation, question responses, and interaction frequency. For Peer Evaluation, it analyzes evaluation activity, feedback quality, and review completeness.

Project Goals

DDD creates a unified dashboard bringing together all user activities across platforms. Through engaging gamification mechanics, it keeps users motivated while providing actionable performance insights. The system enables cross-project analytics, allowing users and administrators to see patterns spanning multiple applications.

GitHub Repositories

DDD, ViBe (DDD), Spandan (DDD), Peer Evaluation (DDD)

Upcoming Features

 Teacher Dashboard for Student Performance Monitoring (Spandan)
Comprehensive teacher dashboard providing real-time visibility into student engagement, performance trends, point distribution, and achievement progress. Features include session overview, student performance views with sorting/filtering, question analytics, points visibility, achievements monitoring, and post-session summaries. The dashboard updates in real-time during live sessions with read-only access for cohosts.

View Feature Request #22 →
 Student Dashboard - Performance & Achievement Overview (Spandan)
Centralized student dashboard showing performance summary, achievement showcase, question interaction history, and session analytics. Displays total points earned, session-wise breakdowns, accuracy percentages, badges earned, and upcoming achievements. Updates in near real-time during live sessions with full analytics available after completion.

View Feature Request #21 →
 Module-wise Dashboard (ViBe)
Comprehensive teacher dashboard showing analytics and insights organized by individual course modules, enabling granular performance tracking.

View Feature Request #538 →
 Quiz-wise Dashboard (ViBe)
Detailed teacher dashboard displaying quiz-specific analytics including student performance, question difficulty, and completion rates per quiz.

View Feature Request #537 →
 Section-wise Dashboard (ViBe)
Teacher dashboard organized by course sections, providing insights into section-level performance and progress tracking.

View Feature Request #529 →
Peer Evaluation

Summary: Multi-level peer assessment system with automated anomaly detection and escalation workflow

Project Overview

Peer Evaluation is an intelligent assessment system facilitating peer-to-peer evaluation with built-in quality control. The system assigns unique IDs to each student per quiz, ensuring anonymity and security. QR codes are generated only when teachers use the bulk-download option for later PDF uploads; when students upload directly via the portal, unique IDs are generated automatically without QR codes. Algorithm-based anomaly detection flags issues to Teaching Assistants, who can escalate to teachers, maintaining academic integrity throughout.

Key Features

The platform automatically generates unique IDs for anonymous evaluation. Teachers choose between bulk download with QR codes for offline distribution or direct student portal uploads with auto-generated IDs. Students evaluate peers anonymously while algorithm-based detection identifies suspicious patterns and statistical outliers. The multi-level escalation workflow moves from peer reviews to automated detection to TA review to teacher intervention.

TAs review flagged submissions through their dashboard, while teachers handle final escalation for complex cases. Rubric-based evaluation ensures consistency. The platform provides comprehensive analytics on evaluation patterns, manages structured feedback with quality controls, and uses statistical analysis for fair rating aggregation. Complete audit trails track the entire process.

Technologies Used

Frontend: React, JavaScript. Backend: Express.js, Node.js. Database: MongoDB. QR code libraries for unique code generation. Statistical outlier detection algorithms (standard deviation analysis, etc.) for anomaly detection. Authentication: JWT/OAuth.

Evaluation Workflow

Bulk Download Path

Teacher creates assessment with rubrics, downloads unique IDs with QR codes, distributes printed assessments, then scans and uploads PDFs. System links PDFs to student IDs via QR codes. Students evaluate assigned peer submissions anonymously.

Student Portal Upload Path

Teacher creates assessment with rubrics. Students upload directly from portal. System auto-generates unique IDs without QR codes. Students evaluate assigned peer submissions anonymously.
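The per-quiz anonymous ID assignment used by both paths can be sketched as below. `crypto.randomUUID` is the real Node.js API; the in-memory map stands in for the MongoDB collection the project would actually use:

```typescript
// Sketch of per-quiz anonymous ID generation: the same student gets a
// stable ID within a quiz but a fresh one for every new quiz, which is
// what preserves anonymity across assessments. The Map is an in-memory
// stand-in for persistent storage.
import { randomUUID } from "node:crypto";

const idMap = new Map<string, string>(); // `${quizId}:${studentId}` -> unique ID

function assignUniqueId(quizId: string, studentId: string): string {
  const key = `${quizId}:${studentId}`;
  if (!idMap.has(key)) idMap.set(key, randomUUID());
  return idMap.get(key)!;
}
```

In the bulk-download path the same ID would additionally be encoded into a QR code on the printed sheet; in the portal path it is attached directly to the upload.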

Common Workflow

System runs algorithm-based analysis for anomalies. Flagged cases go to TAs for investigation. Complex cases escalate to teachers. Validated evaluations aggregate using statistical methods for fair final marks.
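The standard-deviation screen and the final aggregation can be illustrated together. The threshold `k = 1.5` is an assumed value (with only a handful of peer ratings, extreme z-scores are mathematically capped, so a small threshold is needed); the real system may tune this differently:

```typescript
// Minimal version of the standard-deviation anomaly screen: ratings more
// than k standard deviations from the mean are flagged for TA review,
// and the final mark averages the remainder. k = 1.5 is an assumption.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stdDev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(mean(xs.map((x) => (x - m) ** 2)));
}

function aggregateRatings(ratings: number[], k = 1.5) {
  const m = mean(ratings);
  const s = stdDev(ratings);
  const isOutlier = (r: number) => s > 0 && Math.abs(r - m) > k * s;
  const flagged = ratings.filter(isOutlier);
  const kept = ratings.filter((r) => !isOutlier(r));
  return { finalMark: mean(kept), flagged };
}
```

For example, with peer ratings of 8, 7, 8, 9 and an anomalous 1, the outlier is flagged for TA review and the final mark is computed from the consistent four.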

Project Goals

The project implements a fair, transparent multi-level evaluation system ensuring accurate assessment. Anonymity is achieved through unique ID-based identification. The platform supports flexible submission methods (bulk upload or student portal) and automates anomaly detection using statistical algorithms. The efficient escalation workflow ensures issues are handled at appropriate levels. Comprehensive analytics help instructors understand evaluation quality, while multiple checkpoints maintain academic integrity. The system provides actionable insights for continuous improvement.

GitHub Repository

PES

Vi-SlideS

Summary: AI-powered adaptive classroom platform that tailors teaching based on student questions and cognitive analysis

Project Overview

Vi-SlideS revolutionizes teaching by making it question-driven and adaptive. After a brief 5-10 minute topic introduction, students submit questions that shape class direction. AI analyzes collective questions, providing teachers with real-time insights into class mood, motivation, and conceptual understanding. AI automatically addresses straightforward questions, allowing teachers to focus on complex queries requiring deeper explanation and personalized attention.

Key Features

Students submit questions after the topic introduction through a real-time interface. They can choose anonymous or identified submissions and track question status to see which questions were answered by AI and which were addressed by the teacher. AI detects overall class mood by examining sentiment and tone, assesses motivation levels, classifies questions by cognitive level, extracts main themes and concerns, and identifies learning gaps.

The system performs smart triage to determine auto-answerable questions versus those needing teacher attention. Straightforward questions get instant AI responses with relevant sources. Complex questions route to the teacher’s prioritized dashboard. Teachers see real-time question overviews, class insights (mood, motivation, knowledge gaps), AI-prioritized questions, and suggested teaching direction. They can review and override AI answers as needed.

Technologies Used

Current Testing Phase

Google Forms for question collection, Google Sheets for data aggregation and analysis, Google Slides for presentation and visualization.

Planned Stack

Frontend: React, TypeScript. Backend: Express.js, Node.js. Database: MongoDB. AI/NLP: LLMs for question analysis and response generation. Sentiment analysis: NLP models for mood and motivation detection. Classification models for cognitive complexity assessment. Real-time communication: WebSockets.

System Workflow

Pre-class: Teacher presents 5-10 minute overview; students submit questions. AI Analysis: System collects questions, classifies by complexity, analyzes sentiment for mood/motivation, creates gist of understanding and concerns. Response: Straightforward questions get instant AI responses; complex questions route to prioritized teacher dashboard; teacher uses insights to guide discussion dynamically. Post-class: System generates comprehensive analytics report; questions/answers archived for future reference.

Question Classification

AI auto-answers factual or definition-based queries, previously answered similar questions, and low cognitive complexity clarifications. Teachers address questions with high conceptual depth, novel perspectives, complex problem-solving scenarios, personalized explanation needs, and critical thinking queries.
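The triage decision above would be made by an LLM classifier in the real system, but the routing logic can be illustrated with simple heuristics. The cue list and the "previously answered" check are assumptions for the sketch:

```typescript
// Rule-based sketch of question triage; the production system would
// use an LLM classifier rather than keyword cues.
interface TriagedQuestion {
  text: string;
  route: "ai" | "teacher";
  reason: string;
}

const FACTUAL_CUES = ["what is", "define", "when was", "how many"];

function triage(text: string, answeredBefore: boolean): TriagedQuestion {
  const lower = text.toLowerCase();
  if (answeredBefore) {
    return { text, route: "ai", reason: "similar question already answered" };
  }
  if (FACTUAL_CUES.some((cue) => lower.startsWith(cue))) {
    return { text, route: "ai", reason: "factual/definition query" };
  }
  return { text, route: "teacher", reason: "needs conceptual depth" };
}
```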

Project Goals

Vi-SlideS transforms traditional lectures into question-driven, adaptive learning experiences where student curiosity shapes class direction. The platform provides real-time insights into understanding and engagement, reducing teacher burden through automated responses and enabling focus on complex discussions. The responsive teaching approach improves engagement as students see their questions influencing class content. Identifying learning gaps early helps address problems before they become ingrained. The platform creates data-driven teaching strategies based on question analysis, fostering more interactive and student-centered classrooms.

GitHub Repository

Vi-SlideS

Upcoming Features

 Teacher and Student Authentication
Implement Google-based authentication that allows users to sign in using their Google account. During the first login, users should be identified as either a teacher or a student. This role will decide which interface they see after login. The goal is to make access simple while ensuring clear separation between teacher and student functionality.

View Feature Request #1 →
 Teacher Session Creation with Unique Code
Create an option that allows teachers to start a new class session directly from their dashboard. When a session is created, the system should generate a unique session code. This code will represent a specific class or room and will be shared with students, ensuring that all submitted questions are linked to the correct session.

View Feature Request #2 →
 Student Join Session Using Session Code
Add an option on the student interface where students can enter a session code provided by the teacher. Once a valid code is entered, the student should be joined to that session and allowed to submit questions. This ensures that student questions are routed only to the intended class.

View Feature Request #3 →
 Student Question Submission Interface
Create a simple and distraction-free interface that allows students to submit questions after joining a session. The interface should include a text input for the question and a submit button. Each submitted question should be linked to the session and the user, and must be stored so that it can be viewed by the teacher in real-time.

View Feature Request #4 →
 Slides-Based Teacher View for Submitted Questions
Develop an interface on the teacher's side that displays student questions in a slides-style layout. Each question should appear as a separate slide or card, making it easy for the teacher to navigate through questions one by one during the class. New questions should appear in real time, allowing the teacher to move through them sequentially while addressing student doubts. This view will serve as the teacher's primary interface for reviewing and discussing questions during the session.

View Feature Request #5 →
 Session Status Control (Start, Pause, and End Session)
Allow teachers to control the session state by starting, pausing, and ending a session. When a session is paused, students should temporarily be unable to submit new questions, while previously submitted questions remain visible. Once a session is ended, students should no longer be able to submit questions to that session. This control helps teachers manage class flow and ensures questions remain tied to the correct class timeframe.

View Feature Request #6 →
 Minimal Session Summary with Class Mood Gist for Teachers
After a session is ended by the teacher, provide a dedicated summary view for that session. This view should display basic information such as the session name or code, the total number of questions submitted, and the session duration. Additionally, include a summary of the students' mood generated from the overall tone of the submitted questions, such as a brief textual indication of engagement or confusion. The goal is to give teachers a quick reflection on student participation and overall class mood without introducing detailed analytics or complex visualizations.

View Feature Request #7 →
 AI-Based Question Analysis and Auto-Responses
Add an AI system that analyses student questions during a session. Simple or factual questions should receive automatic AI-generated answers, while complex or conceptual questions should be routed to the teacher. Each question should clearly indicate whether it was answered by AI or requires teacher attention. The goal is to reduce teacher workload while maintaining meaningful, human-led discussions.
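The routing decision can be illustrated with a toy heuristic like the one below. A real implementation would call an LLM or trained classifier; the regex rules here are purely a placeholder to show the routing shape, and default to the teacher when in doubt.

```typescript
type Route = "ai" | "teacher";

// Placeholder heuristic; a production system would use an AI classifier.
function routeQuestion(text: string): Route {
  const conceptual = /\bwhy\b|\bexplain\b|\bcompare\b|\bhow does\b/i.test(text);
  const factual = /\bwhat is\b|\bdefine\b|\bwhen\b|\bwho\b/i.test(text);
  // Simple factual questions go to AI; anything conceptual, or anything
  // ambiguous, is routed to the teacher for human-led discussion.
  if (factual && !conceptual) return "ai";
  return "teacher";
}
```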

View Feature Request #8 →
Vi-Notes

Summary: Authenticity verification platform that ensures genuine human writing through keyboard activity monitoring and statistical signature analysis

Project Overview

Vi-Notes verifies human-written content authenticity by monitoring writing in real-time. Users write freely while the platform captures keyboard activity patterns, typing rhythm, editing behaviors, and statistical signatures that distinguish genuine human composition from AI-generated or AI-assisted text.

Key Features

The platform provides a distraction-free writing interface while silent background monitoring captures typing speed, pause patterns, deletions, and composition rhythm. The keyboard monitoring tracks micro-patterns like hesitations before sentences, corrections during idea refinement, and variable typing speeds that reflect natural thinking processes.

Statistical analysis examines linguistic patterns, vocabulary diversity, sentence structure variations, and stylistic consistency. AI-generated text exhibits regularities, such as unusually uniform sentence lengths or vocabulary distribution, that do not match the captured keyboard activity. After each session, Vi-Notes generates authenticity reports with confidence scores, suspicious patterns, and supporting evidence. Users share these reports to prove authorship in academic, professional, or publishing contexts. Real-time feedback flags unusual patterns like pasted text chunks or non-human typing behaviors.

Technologies Used

Frontend: React, TypeScript, Electron for native desktop apps accessing keyboard events. Backend: Node.js, Express.js. OS-level APIs capture keystroke timing and patterns. ML models (TensorFlow, PyTorch) use supervised learning on human vs. AI text and unsupervised anomaly detection. NLP models analyze statistical signatures. MongoDB stores sessions, keystroke data, and reports with privacy-preserving encryption.

Detection Methods

The system identifies behavioral patterns difficult for AI to replicate: natural sentence pauses, real-time corrections, variable typing speed based on cognitive load, and micro-pauses at punctuation. These signatures compare against known human patterns.

Text analysis reveals origin clues through variation in sentence length and structure, idiosyncratic vocabulary choices, and correlation between writing complexity and revision frequency. Human writers revise complex sections more than simple ones; AI-pasted text shows no such correlation.

Cross-referencing behavioral data with textual patterns provides the strongest verification. Paragraphs appearing without keystroke data, suspiciously constant typing speeds, or mismatched statistical signatures flag potential AI involvement.
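One of the timing checks described above can be sketched as follows: flag typing whose inter-key intervals are suspiciously constant. The threshold and function names are illustrative assumptions, not calibrated values from the project.

```typescript
// Differences between consecutive keystroke timestamps (milliseconds).
function interKeyIntervals(timestampsMs: number[]): number[] {
  const gaps: number[] = [];
  for (let i = 1; i < timestampsMs.length; i++) {
    gaps.push(timestampsMs[i] - timestampsMs[i - 1]);
  }
  return gaps;
}

// Heuristic flag: human typing rhythm varies; near-constant intervals
// suggest scripted input. Threshold is illustrative, not calibrated.
function looksNonHuman(timestampsMs: number[]): boolean {
  const gaps = interKeyIntervals(timestampsMs);
  if (gaps.length < 5) return false; // not enough evidence to judge
  const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
  const variance = gaps.reduce((a, b) => a + (b - mean) ** 2, 0) / gaps.length;
  const cv = Math.sqrt(variance) / mean; // coefficient of variation
  return cv < 0.05;
}
```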

Project Goals

Vi-Notes restores trust in written content through reliable authorship verification, helping educators ensure academic integrity and enabling writers to prove authenticity. The platform develops accurate detection algorithms distinguishing human writing from various AI assistance levels while maintaining strict privacy protection. The system adapts continuously as AI writing tools evolve, staying ahead of new techniques attempting to mimic human writing patterns.

GitHub Repository

Vi-Notes

Upcoming Features

 Basic Writing Editor
Create a simple text editor that allows users to type their content. This editor will be the primary location for writing within the application. It does not need formatting options like bold or headings. The goal is to provide a clean and distraction-free space where text input can be captured reliably.

View Feature Request #1 →
 User Login and Registration
Implement basic user registration and login so that each writing session can be associated with a specific user. Users should be able to sign up using an email and password and log in using the same credentials. Advanced features such as roles and password reset are not required at this stage.
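A minimal version of this flow is sketched below using Node's built-in `scrypt` for password hashing. This is an in-memory illustration only; a real implementation would persist users to a database and likely use a vetted auth library rather than hand-rolled credential handling.

```typescript
import { scryptSync, randomBytes, timingSafeEqual } from "node:crypto";

// In-memory user store: email -> salted password hash (illustrative only).
const users = new Map<string, { salt: Buffer; hash: Buffer }>();

function register(email: string, password: string): boolean {
  if (users.has(email)) return false; // email already taken
  const salt = randomBytes(16);
  users.set(email, { salt, hash: scryptSync(password, salt, 32) });
  return true;
}

function login(email: string, password: string): boolean {
  const user = users.get(email);
  if (!user) return false;
  // Constant-time comparison avoids leaking information via timing.
  return timingSafeEqual(user.hash, scryptSync(password, user.salt, 32));
}
```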

View Feature Request #2 →
 Capture Keystroke Timing
While a user is typing in the editor, record basic keystroke timing information. This includes the time difference between key presses and releases. The actual characters typed must not be stored. This data will be used later to understand typing behaviour, not the content itself.
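The recorder can be sketched as a small class fed only timestamps, never characters, which satisfies the constraint above by construction. In the browser this would be driven by `keydown`/`keyup` listeners; the class name and method names are illustrative.

```typescript
// Records dwell time (press-to-release) and flight time (release-to-next-
// press) in milliseconds. No key identity is ever stored.
class KeystrokeTimer {
  private pressedAt: number | null = null;
  private lastReleaseAt: number | null = null;
  readonly dwellMs: number[] = [];
  readonly flightMs: number[] = [];

  keyDown(tMs: number): void {
    if (this.lastReleaseAt !== null) {
      this.flightMs.push(tMs - this.lastReleaseAt);
    }
    this.pressedAt = tMs;
  }

  keyUp(tMs: number): void {
    if (this.pressedAt !== null) {
      this.dwellMs.push(tMs - this.pressedAt);
    }
    this.lastReleaseAt = tMs;
    this.pressedAt = null;
  }
}
```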

View Feature Request #3 →
 Detect Pasted Text
Detect when a user pastes text into the editor instead of typing it manually. When a paste happens, record that event and the amount of text pasted. This will help differentiate pasted content from naturally typed content in analysis.
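The event log described above could look like this sketch. In the browser the data would come from the editor's `paste` event (via `event.clipboardData`); here the handler takes the pasted text directly, and `pastedFraction` is a hypothetical helper showing how the log feeds later analysis.

```typescript
interface PasteEvent {
  atMs: number;   // when the paste happened
  length: number; // how much text was pasted
}

// Illustrative paste log; wired to the editor's "paste" event in practice.
const pasteEvents: PasteEvent[] = [];

function recordPaste(atMs: number, pastedText: string): void {
  pasteEvents.push({ atMs, length: pastedText.length });
}

// Rough share of the final document that arrived via paste rather than
// natural typing, for use in later analysis.
function pastedFraction(totalChars: number): number {
  const pasted = pasteEvents.reduce((sum, e) => sum + e.length, 0);
  return totalChars === 0 ? 0 : Math.min(1, pasted / totalChars);
}
```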

View Feature Request #4 →
 Save Writing Session Data
Save the written text along with related session information so it can be accessed later. Each writing session should be stored in a way that links the content to the user and the captured typing metadata. The initial implementation can focus on simple storage and retrieval without complex analysis.
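A simple storage shape for this could serialize each session to JSON keyed by id, as sketched below. The `WritingSession` fields and the `Map`-backed store are assumptions for illustration; the project would use MongoDB (per the technology list) rather than in-memory storage.

```typescript
// Session record linking content, user, and captured typing metadata.
interface WritingSession {
  id: string;
  userId: string;
  content: string;
  dwellMs: number[]; // keystroke timing metadata from the recorder
  savedAt: string;   // ISO timestamp
}

// Stand-in for a database collection: id -> serialized session.
const sessionStore = new Map<string, string>();

function saveSession(session: WritingSession): void {
  sessionStore.set(session.id, JSON.stringify(session));
}

function loadSession(id: string): WritingSession | undefined {
  const raw = sessionStore.get(id);
  return raw === undefined ? undefined : (JSON.parse(raw) as WritingSession);
}
```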

View Feature Request #5 →