
BE-18: Feedback Panel Research (Rating, Correctness, Length) #18

@tecnodeveloper

Description

Research how to build a feedback system for chatbot responses. Users will rate each bot reply on quality, correctness, and length. This data will be stored in MongoDB and used for analytics and model improvement.


User Story

Given a user receives a chatbot response
When the user interacts with the feedback panel
Then the system should capture and store structured feedback


Tasks


Understand Feedback System

  1. Define Purpose of Feedback

    • Improve chatbot accuracy
    • Track response quality
    • Collect user satisfaction
  2. Identify Feedback Types

    • Rating (1–5 stars)
    • Correctness (Yes / No / Partial)
    • Length (Too short / Good / Too long)
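The three feedback types above could be modeled as a small typed structure. This is an illustrative sketch only; the enum value strings and class names are assumptions, not a finalized contract.

```python
from dataclasses import dataclass
from enum import Enum

class Correctness(Enum):
    YES = "yes"
    NO = "no"
    PARTIAL = "partial"

class LengthFeedback(Enum):
    TOO_SHORT = "too_short"
    GOOD = "good"
    TOO_LONG = "too_long"

@dataclass
class Feedback:
    message_id: str
    user_id: str
    rating: int               # 1-5 stars
    correctness: Correctness
    length: LengthFeedback
```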

UI/UX Research

  1. Study Feedback UI Patterns

    • Star rating systems
    • Thumbs up/down systems
    • Dropdown selection systems
  2. Decide UI Style

    • Simple panel below message
    • Inline feedback buttons
    • Expandable feedback section

Frontend Design

  1. Create Feedback Panel Component

    • Rating selector
    • Correctness selector
    • Length selector
  2. Attach to Each Message

    • Only bot messages
    • Unique message_id tracking

Backend Design

  1. Create Feedback API

    • POST /feedback/submit
    • Accept message_id + rating data
  2. Validate Input

    • Rating range check (1–5)
    • Required fields validation
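The validation rules above (rating range check plus required fields) could be sketched as a single function that returns a list of errors. Field names and accepted enum strings are assumptions mirroring the Postman payload later in this issue.

```python
def validate_feedback(payload: dict) -> list[str]:
    """Return a list of validation errors; an empty list means the payload is valid."""
    errors = []
    # Required fields validation
    for field in ("message_id", "user_id", "rating", "correctness", "length"):
        if field not in payload:
            errors.append(f"missing field: {field}")
    # Rating range check (1-5)
    rating = payload.get("rating")
    if rating is not None and (not isinstance(rating, int) or not 1 <= rating <= 5):
        errors.append("rating must be an integer between 1 and 5")
    # Enumerated values (assumed spellings)
    if payload.get("correctness") not in (None, "yes", "no", "partial"):
        errors.append("correctness must be yes, no, or partial")
    if payload.get("length") not in (None, "too_short", "good", "too_long"):
        errors.append("length must be too_short, good, or too_long")
    return errors
```

A POST /feedback/submit handler would call this first and respond 400 with the error list when it is non-empty.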

MongoDB Storage

  1. Create Feedback Collection

    • message_id
    • user_id
    • rating
    • correctness
    • length_feedback
    • timestamp
  2. Ensure Linking

    • Connect feedback → message → session
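One possible shape for a stored document, using the collection fields listed above. The helper name is hypothetical; session linking is resolved through message_id rather than duplicated on the feedback document.

```python
from datetime import datetime, timezone

def build_feedback_doc(message_id, user_id, rating, correctness, length_feedback):
    """Assemble the MongoDB feedback document (a sketch, not a fixed schema)."""
    return {
        "message_id": message_id,          # links feedback -> message -> session
        "user_id": user_id,
        "rating": rating,                  # 1-5
        "correctness": correctness,        # "yes" / "no" / "partial"
        "length_feedback": length_feedback,  # "too_short" / "good" / "too_long"
        "timestamp": datetime.now(timezone.utc),
    }
```

Note the API payload uses `length` while storage uses `length_feedback`; the handler would map one to the other.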

Data Flow Design

  1. Frontend Flow
  • User clicks feedback
  • Send to backend API
  • Show confirmation
  2. Backend Flow
  • Receive feedback
  • Store in MongoDB
  • Link to message

Feedback Analytics Preparation

  1. Define Metrics
  • Average rating per session
  • Accuracy score
  • Response quality trends
  2. Prepare for Dashboard
  • Store structured data
  • Enable aggregation queries
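"Average rating per session" could be expressed as a MongoDB aggregation pipeline like the one below. This assumes a `messages` collection carrying `session_id`; the collection and field names are assumptions for illustration.

```python
# Pipeline for: average rating per session.
# Feedback links to a message via message_id; the message carries session_id.
avg_rating_per_session = [
    {"$lookup": {
        "from": "messages",            # assumed messages collection
        "localField": "message_id",
        "foreignField": "message_id",
        "as": "message",
    }},
    {"$unwind": "$message"},
    {"$group": {
        "_id": "$message.session_id",
        "avg_rating": {"$avg": "$rating"},
        "feedback_count": {"$sum": 1},
    }},
]
```

Run with `db.feedback.aggregate(avg_rating_per_session)`; denormalizing `session_id` onto feedback documents would remove the `$lookup` at the cost of duplication.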

Postman Testing 🧪

  1. Setup Postman
  • Create POST request /feedback/submit
  2. Test Payload

```json
{
  "message_id": "abc123",
  "user_id": "user1",
  "rating": 4,
  "correctness": "yes",
  "length": "good"
}
```
  3. Validate Responses
  • Success response
  • Error handling
  • Missing fields

Edge Cases

  1. Handle Invalid Data
  • Rating out of range
  • Missing message_id
  • Duplicate feedback
  2. Prevent Spam
  • One feedback per message
  • User restriction
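"One feedback per message" is most robustly enforced in MongoDB with a unique compound index on (message_id, user_id). The sketch below models that guarantee with an in-memory stand-in so the logic can be seen without a database; the class name is hypothetical.

```python
class FeedbackStore:
    """In-memory stand-in for the feedback collection. In MongoDB the same
    guarantee would come from a unique compound index on (message_id, user_id),
    with duplicate inserts rejected as key violations."""

    def __init__(self):
        self._seen = set()
        self.docs = []

    def submit(self, doc: dict) -> bool:
        key = (doc["message_id"], doc["user_id"])
        if key in self._seen:
            return False  # one feedback per message per user
        self._seen.add(key)
        self.docs.append(doc)
        return True
```

The API would translate a rejected duplicate into a 409-style error response rather than silently overwriting.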

Performance Considerations

  1. Efficient Storage
  • Lightweight documents
  • Indexed message_id
  2. Scalability
  • High feedback volume handling
  • Fast aggregation queries

Integration with Chat UI

  1. Attach to Chat Messages
  • Show after bot response
  • Enable quick rating
  2. Update UI State
  • Show submitted status
  • Disable duplicate feedback

Acceptance Criteria

  • Feedback system defined clearly
  • UI structure designed
  • Backend API planned
  • MongoDB schema finalized
  • Postman testing completed
  • Linked to chat messages

Testing Steps

  1. Simulate chat response
  2. Submit feedback
  3. Check MongoDB storage
  4. Validate linking with message
  5. Test duplicate prevention

Definition of Done

  • Feedback system fully designed
  • Data structure finalized
  • API ready for implementation
  • Connected to chatbot flow
