Dates
Registrations: 2nd June to 20th June
Seminars by mem0 and IDL: 20th to 24th June [Notification will be provided in the WhatsApp group]
Development Period: 24th June to 24th July
Judging Period: 25th July to 10th August
Final Showcase and Winner Announcement: 15th August 2025
Eligibility
All College and High School Students from India
Solo or teams of up to three members (no prizes will be awarded to teams with more than three members)
Problem Statement
Debating for Developers
Track A: Debate Learning & Practice
Challenge: Create an AI-powered learning platform that introduces school and college students to the fundamentals of debating through gamification.
Context: Many school students find debate intimidating or struggle to understand its structure and techniques. A gamified approach can make learning more engaging and accessible.
Requirements:
- Develop an interactive, gamified learning system that introduces debate formats, rules, and techniques
- Include progression levels that gradually increase in complexity
- Provide immediate feedback and explanations
- Cover core skills such as framing rebuttals to arguments, identifying logical fallacies, prioritizing lines of argument, and building burden-fulfillment intuition [using Agents] (see the sketch after this list)
- Incorporate engaging elements like points, achievements, and challenges
- Create age-appropriate content for different school grades
- Ensure accessibility across various devices
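As a rough illustration of what one gamified drill could look like, here is a minimal Python sketch of a fallacy-identification exercise with points and immediate feedback. Every name in it (Drill, Level, run_drill) is hypothetical, not a prescribed architecture; a real system would put an agent or LLM behind the content and feedback.

```python
# Minimal sketch of one gamified drill: spotting logical fallacies.
# All names here are illustrative assumptions, not a required design.
from dataclasses import dataclass, field

@dataclass
class Drill:
    prompt: str            # argument shown to the student
    options: list[str]     # candidate fallacy labels
    answer: int            # index of the correct label
    explanation: str       # immediate feedback shown after answering

@dataclass
class Level:
    name: str
    drills: list[Drill]
    pass_score: int        # points needed to unlock the next level

@dataclass
class Player:
    points: int = 0
    badges: list[str] = field(default_factory=list)

def run_drill(player: Player, drill: Drill, choice: int) -> str:
    """Score one answer and return immediate feedback."""
    if choice == drill.answer:
        player.points += 10
        return f"Correct! {drill.explanation}"
    return f"Not quite. {drill.explanation}"

level1 = Level(
    name="Spot the Fallacy",
    drills=[Drill(
        prompt="'Everyone supports this policy, so it must be right.'",
        options=["Ad hominem", "Bandwagon", "Straw man"],
        answer=1,
        explanation="Popularity alone does not make a claim true (bandwagon).",
    )],
    pass_score=10,
)
player = Player()
print(run_drill(player, level1.drills[0], choice=1))
```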
Expected Deliverables:
- Working prototype of the gamified learning platform
- Documentation of the learning path design
- Evaluation metrics to assess learning outcomes
- Implementation plan for schools
- Comprehensive feedback system that maps improvements to the speaker scale
- Callouts of common logical fallacies, instances of non-compliance with the format manual, etc., integrated into your gamified system
Technical Considerations:
- User experience should be intuitive and engaging
- Content should be accurate and aligned with standard debating practices
- System should support a variety of interactive elements (quizzes, simulations, etc.)
- Solution should be scalable to accommodate a growing user base
Evaluation Criteria:
- Educational Value (30%): Quality and accuracy of debate content, pedagogical approach, learning progression design
- Gamification Effectiveness (25%): Engagement mechanisms, reward systems, user motivation elements
- User Experience (20%): Interface design, accessibility, ease of use for target age groups
- Technical Implementation (15%): Code quality, system architecture, performance
- Scalability & Adaptability (10%): Ability to expand to different age groups, debate formats, and user volumes
Track B: Live Simulated Mock Debates
Challenge: Develop an integrated AI system that simulates full debate rounds by generating realistic opponents, delivering high-quality adjudication feedback, and replicating the experience of a full debate environment, from preparation to judgment.
Context: Debate practice often requires finding willing partners for both debating and judging, which can be difficult. AI debate partners could allow for on-demand practice and targeted skill development.
Requirements:
- Case Prep:
  - Develop an AI tool that helps debaters prepare cohesive cases for given motions
- Debating:
  - Create AI debaters with adjustable skill levels (beginner, intermediate, advanced); see the sketch after this list
  - AI should deliver structured speeches appropriate to their assigned position
  - The system should respond to the human debater's arguments appropriately
  - Support for different roles (Prime Minister, Opposition Leader, Whip, Reply, etc.)
- POIs:
  - Implement AI capability to offer Points of Information during human speeches
  - Generate contextually relevant and challenging POIs
  - Respond appropriately to the human's handling of POIs
- Adjudicators:
  - Develop an AI judge capable of evaluating:
    - Argument quality and logical coherence
    - Rhetorical techniques and persuasiveness
    - Response to opposition arguments
    - Structure and time management
    - Delivery and presentation
  - Generate detailed, constructive feedback in the style of experienced adjudicators
- Miscellaneous:
  - Support multiple debate formats (Asian Parliamentary, British Parliamentary, etc.)
  - Function as a note-taking assistant during live debates
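One hedged sketch of how roles and skill levels might parameterize speech generation while keeping context across the round. Here `generate_text` stands in for whatever LLM backend you choose, and the skill presets and prompt fields are assumptions for illustration, not requirements.

```python
# Minimal sketch: skill levels and roles parameterize an AI debater's
# speeches; the transcript keeps context across the whole round.
from dataclasses import dataclass

SKILL_PRESETS = {
    "beginner":     "Use simple arguments, minimal weighing, occasional structural slips.",
    "intermediate": "Use organized substantive arguments with basic weighing and rebuttal.",
    "advanced":     "Use layered analysis, comparative weighing, and strategic rebuttal.",
}

@dataclass
class DebateState:
    motion: str
    format: str                 # e.g. "Asian Parliamentary"
    transcript: list[str]       # all speeches so far, in order

def generate_text(prompt: str) -> str:
    """Placeholder for your LLM backend (e.g. an API call)."""
    raise NotImplementedError

def ai_speech(state: DebateState, role: str, skill: str) -> str:
    """Generate one speech that stays consistent with the round so far."""
    prompt = (
        f"You are the {role} in a {state.format} debate on: {state.motion}.\n"
        f"Skill profile: {SKILL_PRESETS[skill]}\n"
        f"Respond to the round so far and fulfil your role's burdens.\n\n"
        + "\n\n".join(state.transcript)
    )
    speech = generate_text(prompt)
    state.transcript.append(f"[{role}] {speech}")  # keep context for later speeches
    return speech
```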
Expected Deliverables:
- Working prototype of the AI debate practice system with case prep, debating, and adjudicators
- Documentation of AI debater behavior and capabilities
- User interface for setting up practice sessions
- Evaluation of system performance and realism
- Performance analysis compared to humans
- Adjudicating:
  - Documentation of evaluation criteria and methodology
  - User interface for accessing and reviewing feedback
  - Reductionist breakdown of the performance analysis with an accessible chain of thought (CoT). Make each decision as mathematical as possible. As one example, this could look like assigning each clash in the debate a weight, giving each team a positive (if they win that clash), negative (if they lose it), or zero (symmetric) value on it, and summing the results; a minimal sketch follows this list. However, we expect you to employ more comprehensive and robust methods to avoid bias or wrong calls.
  - Brief description of the algorithm (as above) that your AI judge will use to make decisions
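A minimal sketch of the baseline clash-weighting idea described above. The Clash fields, weights, and `decide` helper are illustrative only; as the brief says, stronger and less bias-prone methods are expected.

```python
# Direct sketch of the clash-weighting baseline: each clash gets a
# weight, each team gets +1 / -1 / 0 on that clash, and the verdict
# is the weighted sum.
from dataclasses import dataclass

@dataclass
class Clash:
    label: str      # e.g. "economic feasibility"
    weight: float   # importance of this clash to the motion
    outcome: int    # +1 if Government wins it, -1 if Opposition, 0 if symmetric

def decide(clashes: list[Clash]) -> tuple[str, float]:
    score = sum(c.weight * c.outcome for c in clashes)
    if score > 0:
        return "Government", score
    if score < 0:
        return "Opposition", score
    return "Tie", score

clashes = [
    Clash("economic feasibility", weight=0.5, outcome=+1),
    Clash("individual rights",    weight=0.3, outcome=-1),
    Clash("implementation",       weight=0.2, outcome=0),
]
print(decide(clashes))   # ('Government', 0.2)
```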
Technical Considerations:
- AI debaters must demonstrate understanding of debate structure and strategy
- System should maintain coherent positions throughout a debate
- Natural language generation must produce debate-appropriate speech
- Solution should work seamlessly in real-time
- The system should maintain context across multiple speeches throughout the debate
- Evaluation criteria should match established judging criteria
Evaluation Criteria:
- Transcription Accuracy (20%): Precision in converting speech to text across different speakers, accents, and debate speeds
- Case Preparation Quality (5%): Relevance, depth, and variety of arguments and examples generated
- AI Debate Speech Quality (15%): Coherence, structure, and strategic quality of AI-generated speeches
- Interactivity (5%): Responsiveness to human arguments, quality of POIs, adaptive behavior
- Skill Level Differentiation (15%): Clear and appropriate distinction between beginner, intermediate, and advanced AI debaters
- User Interface (10%): Ease of setup, intuitive controls, session management
- Judging Quality and Feedback Relevance (15%):
  - Alignment with established adjudication criteria, comprehensiveness of evaluation
  - Specificity, constructiveness, and actionability of feedback provided
- Multi-format Support (5%): Adaptability to different debate formats and rules
- System Performance (10%): Speed, reliability, and resource efficiency in real-world conditions
Track C: Enhanced Tabbying with Gen AI
Challenge: Extend the open-source Tabbycat tournament management system to provide AI-enhanced analytics, feedback, and learning resources.
Context: Tabbycat is widely used for tournament management, but lacks advanced analytics and feedback capabilities. Enhancing it could provide valuable insights to debaters and tournament organizers.
Requirements:
- Develop extensions to the Tabbycat system that:
  - Transcribe and analyze individual speeches
  - Generate personalized feedback reports for debaters
  - Compile learning trajectories across multiple tournaments (see the sketch after this list)
  - Identify patterns and opportunities for improvement
  - Create comprehensive tournament analytics
  - Design user-friendly visualizations of performance data
  - Implement secure data handling and privacy controls
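A minimal sketch of one possible learning-trajectory computation: each speaker's mean score per tournament, in order. The flat record format here is a hypothetical stand-in; a real extension would read speaker scores from Tabbycat's own data models.

```python
# Minimal sketch: per-speaker mean score per tournament. Assumes the
# input records arrive in chronological order (dict insertion order
# then preserves tournament order).
from collections import defaultdict
from statistics import mean

def learning_trajectory(records: list[dict]) -> dict[str, list[float]]:
    """Return each speaker's mean score per tournament, in order seen."""
    by_speaker = defaultdict(lambda: defaultdict(list))
    for r in records:   # r = {"speaker", "tournament", "score"}
        by_speaker[r["speaker"]][r["tournament"]].append(r["score"])
    return {
        speaker: [mean(scores) for scores in per_tournament.values()]
        for speaker, per_tournament in by_speaker.items()
    }

records = [
    {"speaker": "A", "tournament": "T1", "score": 74.0},
    {"speaker": "A", "tournament": "T1", "score": 76.0},
    {"speaker": "A", "tournament": "T2", "score": 78.0},
]
print(learning_trajectory(records))   # {'A': [75.0, 78.0]}
```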
Expected Deliverables:
- Working extensions to the Tabbycat codebase
- Documentation for installation and use
- Data visualization components
- Performance analysis of the system
Technical Considerations:
- Extensions must integrate with the existing Tabbycat architecture
- System should handle large datasets efficiently
- Solution must respect data privacy and consent requirements
- Code should follow open-source best practices
Evaluation Criteria:
- Tabbycat Integration (25%): Seamless integration with existing codebase, adherence to project architecture
- Analytics Quality (25%): Depth, relevance, and actionability of insights generated
- User Interface (20%): Clarity, intuitiveness, and information design of data presentations
- Performance & Scalability (15%): Ability to handle multiple tournaments and large participant numbers
- Data Privacy & Security (15%): Implementation of proper consent mechanisms, data protection measures
Prizes
Total prize pool of 35,000 INR (in addition to possible pre-placement offers and internship opportunities)
Track A Winner - 10,000 INR
Track B Winner - 15,000 INR
Track C Winner - 10,000 INR
Judging Criteria and Winner Selection
Winners are selected based on the deliverables and evaluation criteria specified in each track's problem statement.
