Banjo.

Scope

Online dialogue platform for academic use

/

Client

Banjo · Early-stage EdTech Startup

/

Duration

Ongoing project

/

Year

2025

/

Overview

(01)

Building a digital dialogue platform for academia

We partnered with Banjo to design and launch a web-based dialogue platform for academic settings as a three-month MVP. The goal was to foster structured, civil, and inclusive conversations in university classrooms: validating discussion flows, testing AI moderation, and gathering data to guide future product development.

/

Challenge

(02)

Reimagining classroom conversations for universities

Polarization and the breakdown of communication in academic settings had made it harder for students to engage honestly and openly. Traditional platforms lacked the structure, safety, and pedagogical grounding needed to support meaningful dialogue. Banjo's challenge was to design and validate an MVP in just three months: one that could facilitate constructive, viewpoint-diverse conversations while minimizing conflict, and that could test AI-assisted moderation in real classrooms.

/

Solution

(03)

A structured, AI-supported dialogue experience

We designed and built an application supporting two key user types: Professors and Students. Grounded in Socratic Seminar pedagogy, the platform lets professors create and moderate dialogues, define anonymity levels, set discussion stages, and track engagement, while students participate through claims, replies, and voting across structured stages. The platform integrates role-based access, AI-assisted moderation to detect harmful content, and AI support for generating discussion prompts. Core engagement features include notifications, reporting tools, and summaries of dialogue outcomes.

/

Testimonial

(04)

In the client’s own words

/

Outcomes

(05)

A validated MVP for classroom dialogue

Within three months, Banjo launched its MVP and validated key participation flows in academic settings. Professors successfully created and moderated structured dialogues, while students actively engaged through claims, replies, and voting. The AI moderation system proved effective in reducing harmful content and supporting constructive discourse. Initial surveys captured usability insights and satisfaction metrics, providing clear evidence to inform future iterations and a roadmap for scaling beyond the MVP.