Tanoto

AI & ML-powered interview assistance

Project Overview

Tanoto is an app designed to help users practice interviews and receive feedback on their answers' content, speaking pace, eye contact, and facial emotions.
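One of the feedback dimensions above, speaking pace, can be illustrated with a minimal sketch. The functions, thresholds, and timestamp format below are assumptions for illustration, not Tanoto's actual implementation; they assume a speech-to-text transcript with word-level timestamps, the kind a model like Whisper can emit.

```python
# Hypothetical sketch: deriving pace feedback from a transcript with
# word-level timestamps. All names and thresholds are illustrative.

def words_per_minute(words: list[tuple[str, float, float]]) -> float:
    """Compute speaking pace from (word, start_sec, end_sec) tuples."""
    if not words:
        return 0.0
    duration = words[-1][2] - words[0][1]  # seconds of speech covered
    if duration <= 0:
        return 0.0
    return len(words) * 60.0 / duration

def pace_feedback(wpm: float) -> str:
    """Map pace to a coarse feedback label (thresholds are assumptions)."""
    if wpm < 110:
        return "too slow"
    if wpm > 160:
        return "too fast"
    return "good pace"

# Example: four words spoken over 1.5 seconds of audio.
transcript = [("tell", 0.0, 0.3), ("me", 0.3, 0.5),
              ("about", 0.5, 0.9), ("yourself", 0.9, 1.5)]
wpm = words_per_minute(transcript)
```

A real pipeline would aggregate pace over a sliding window rather than the whole answer, so feedback can point at the specific moments where the candidate rushed.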

CLIENT

CodeLink

Team Model

Front-end Developer

Machine Learning Developer

Back-end Developer

Technical Lead

Quality Assurance Engineer

Product Designer

Product Owner

PLATFORM

Web

TECHNOLOGY

Emotion Classification

Python

Whisper

Face Landmark Detection

ReactJS

MediaPipe Face Detection

NestJS

NextJS

Text to Speech

Challenge

How can we assist job applicants in improving their interview and presentation skills?

Request

CodeLink had developed their own proprietary text-to-speech and speech-to-text AI model. They wanted to run a design sprint to experiment with how they could turn the model into a commercial product. Stakeholders tasked CodeLink's internal teams with running the sprint and proposing a final, tested prototype, which would then be built into an MVP product release.

Engagement Model

The CodeLink internal team worked fully autonomously to facilitate and run the design sprint, test the prototype, and build out the MVP, followed by a fully functional V1 release of the product.

Engagement Length and Scale

The MVP phase took 6 weeks to complete and release for beta testing. We then ran another 6-week phase to implement basic authentication and profile management for the V1 release of the product.

Project Outcome

To start the project, we conducted a 1-week virtual Design Sprint workshop. Our goal was to establish the product's goals, vision, and value proposition. We created a low-fi prototype and held in-person user testing to gain insights into user needs and how to meet them. Our tests validated the product concept, and we began development.

To develop the app, the team used a home-grown Text-to-Speech solution and Whisper for Speech-to-Text. We researched and applied real-time face landmark tracking models, such as MediaPipe, and then researched and applied real-time emotion classification models. The final product was fully developed and released to the market.
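To make the face landmark step concrete, here is a minimal sketch of an eye-contact heuristic over MediaPipe-style normalized landmark coordinates (x, y in [0, 1] image space). The functions, corner/iris points, and threshold are assumptions for illustration; a real pipeline would read landmarks from the MediaPipe face detection output rather than hard-coded tuples.

```python
# Illustrative sketch: scoring eye contact from normalized face
# landmarks. Coordinates and the threshold below are assumptions.

def iris_centering(eye_outer: tuple[float, float],
                   eye_inner: tuple[float, float],
                   iris: tuple[float, float]) -> float:
    """Return a 0..1 score for how centered the iris is between
    the two eye corners along the horizontal axis."""
    span = eye_inner[0] - eye_outer[0]
    if span == 0:
        return 0.0
    t = (iris[0] - eye_outer[0]) / span  # 0 at outer corner, 1 at inner
    return max(0.0, 1.0 - abs(t - 0.5) * 2.0)  # 1.0 when perfectly centered

def looking_at_camera(left_score: float, right_score: float,
                      threshold: float = 0.6) -> bool:
    """Heuristic: both irises roughly centered => likely facing camera."""
    return min(left_score, right_score) >= threshold

# Example frame: iris sits at the midpoint between the eye corners.
score = iris_centering((0.30, 0.40), (0.42, 0.40), (0.36, 0.40))
```

Per-frame scores like this could then be averaged over the answer to report what share of the interview the candidate spent looking at the camera.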

Tags

Prototype Testing

Product Design

Artificial Intelligence

Autonomous Team

Product Development

Machine Learning

Design Sprint
