Summary of "Build & Deploy Full Stack AI Mock Interview App with Next.js | React, Drizzle ORM, Gemini AI, Clerk"
Overview
This tutorial builds a full‑stack AI “mock interview” web app with a typical user flow:
- Landing page → authentication → dashboard.
- From the dashboard users can create new mock interviews, view a list of previous interviews, retake interviews, or view feedback.
- When creating a new interview the user supplies job role, job description/tags, and years of experience.
- During an interview the AI generates N interview questions (JSON). The user can enable webcam & mic, listen to each question (TTS), record spoken answers, navigate questions, and end the interview.
- Recorded answers are transcribed, sent to the AI for feedback and rating, and stored in the database.
- Feedback is presented as an overall rating plus per‑question collapsible details (user answer, correct answer, feedback, rating).
- Optional paid plans / upgrades are supported via Stripe checkout links.
Features / User Flow
- Dashboard: create and list mock interviews; view/retake existing interviews.
- “Add new interview” dialog: form for job position, description/tags, experience level.
- Interview runtime:
  - The AI generates a JSON payload of questions & answers.
  - Questions are shown one by one with TTS playback.
  - The user records spoken answers (webcam optional); answers are transcribed to text.
  - Each answer is evaluated by the AI for feedback and a rating.
- Data persistence:
  - Store the AI Q&A JSON in a `mock_interview` table.
  - Store per-question user answers + feedback in a `user_answers` table.
- Feedback page: aggregate rating and collapsible per‑question feedback.
- Billing: Stripe payment links for upgrade flows.
Main technologies & libraries used
Frontend framework
- Next.js (app router, folder‑based routing, root layout)
- React (client components, hooks)
Styling & UI
- Tailwind CSS
- shadcn/ui components (Dialog, Button, Input, Textarea, Collapsible, Toast, etc.)
- HyperUI snippets/templates for sign-in UI
- Lucide React icons
Authentication
- Clerk (sign-in / sign-up, social logins, ClerkProvider, middleware to protect routes, user button, Clerk Elements for custom UI)
Database & ORM (backend)
- PostgreSQL (serverless provider: Neon)
- Drizzle ORM (schema in JS, drizzle.config.js)
- drizzle-kit for migrations (`db push`) and Drizzle Studio for inspection
- `uuid` for unique mock IDs; `moment` for timestamps
AI / LLM integration
- Google Generative AI (Gemini); Google AI Studio
- `@google/generative-ai` (npm package)
- Prompt engineering to return structured JSON for Q&A and for feedback
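The prompt-engineering step can be sketched as a small helper that assembles the request for a fixed number of Q&A pairs in pure JSON. The function name, wording, and field names below are illustrative assumptions, not the tutorial's exact code; the SDK usage is shown in comments because it requires a live API key.

```javascript
// Hypothetical helper: build the kind of prompt the tutorial describes, asking
// Gemini for a fixed number of Q&A pairs returned strictly as JSON.
function buildInterviewPrompt({ jobPosition, jobDescription, yearsOfExperience, questionCount = 5 }) {
  return (
    `Job position: ${jobPosition}. Job description: ${jobDescription}. ` +
    `Years of experience: ${yearsOfExperience}. ` +
    `Give ${questionCount} interview questions with answers in JSON format, ` +
    `as an array of objects with "question" and "answer" fields. Return only JSON.`
  );
}

// Usage with the @google/generative-ai SDK (commented out; needs an API key):
// import { GoogleGenerativeAI } from "@google/generative-ai";
// const genAI = new GoogleGenerativeAI(process.env.NEXT_PUBLIC_GEMINI_API_KEY);
// const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
// const result = await model.generateContent(buildInterviewPrompt({ ... }));

const prompt = buildInterviewPrompt({
  jobPosition: "Frontend Developer",
  jobDescription: "React, Next.js",
  yearsOfExperience: 3,
});
console.log(prompt.includes("5 interview questions")); // → true
```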
Media & speech
- `react-webcam` for camera preview
- `react-use-speech-to-text` (`useSpeechToText` hook) for transcription
- browser `speechSynthesis` for text-to-speech playback
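The TTS playback can be sketched as a small guarded function; `speakQuestion` is an assumed name, and the guard for non-browser environments is an addition for safety under Next.js server-side rendering.

```javascript
// Read an interview question aloud with the browser's built-in speechSynthesis API.
// Returns false when the API is unavailable (e.g., during server-side rendering or in Node).
function speakQuestion(text) {
  if (typeof window === "undefined" || !("speechSynthesis" in window)) {
    return false;
  }
  const utterance = new SpeechSynthesisUtterance(text);
  window.speechSynthesis.cancel(); // stop any previous playback first
  window.speechSynthesis.speak(utterance);
  return true;
}
```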
Deployment & tooling
- Git + GitHub
- Vercel for deployment (with environment variables)
- Stripe for checkout/subscriptions (payment links)
- VS Code extensions: ES7+ React snippets, Tailwind CSS IntelliSense
Architecture & workflow details
Authentication
- Clerk wraps the Next.js app via `ClerkProvider`.
- `middleware.js` is used to protect selected routes (dashboard, interview routes).
- Custom sign-in / sign-up pages are created and grouped under a route group folder (e.g., `(auth)`) and can use Clerk Elements for customization.
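A minimal sketch of the route-protection middleware, using Clerk's current `clerkMiddleware` / `createRouteMatcher` API (the tutorial may instead use the older `authMiddleware` with `publicRoutes`); the exact route patterns are assumptions based on the dashboard and interview routes described above.

```javascript
// middleware.js — a sketch, not the tutorial's exact code.
import { clerkMiddleware, createRouteMatcher } from "@clerk/nextjs/server";

// Routes that require a signed-in user; everything else stays public.
const isProtectedRoute = createRouteMatcher(["/dashboard(.*)", "/interview(.*)"]);

export default clerkMiddleware(async (auth, req) => {
  if (isProtectedRoute(req)) await auth.protect();
});

export const config = {
  // Run on all routes except static assets and Next.js internals.
  matcher: ["/((?!.*\\..*|_next).*)", "/", "/(api|trpc)(.*)"],
};
```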
AI Q&A generation
- On interview creation, the client builds a prompt (job position, description, experience, count) and calls Google Generative AI to produce JSON of questions + answers.
- The response must be sanitized to remove stray prefixes/garbage before parsing JSON.
- The raw AI JSON string is stored in `mock_interview` (columns: `mock_id`, `created_by`, `created_at`, `job_position`, `job_description`, `job_experience`, `json_mock_response`).
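The sanitization step can be isolated into a pure helper; `cleanJsonResponse` is a hypothetical name, and the fence-stripping plus bracket-slicing strategy is one reasonable way to implement what the summary describes.

```javascript
// Gemini often wraps JSON in markdown code fences or adds stray prose around it.
// cleanJsonResponse (hypothetical helper) extracts and parses the JSON payload.
function cleanJsonResponse(raw) {
  // Strip markdown code fences (three backticks, written as \x60 escapes
  // so this snippet stays fence-safe), with or without a "json" tag.
  let text = raw.replace(/\x60{3}(json)?/gi, "").trim();
  // Fall back to the outermost { ... } or [ ... ] if prose surrounds the JSON.
  const start = text.search(/[[{]/);
  const end = Math.max(text.lastIndexOf("}"), text.lastIndexOf("]"));
  if (start !== -1 && end > start) text = text.slice(start, end + 1);
  return JSON.parse(text);
}

// Example: a reply with stray text before and after the JSON.
const raw = 'Sure! Here is the JSON: [{"question":"What is React?","answer":"A UI library."}]';
console.log(cleanJsonResponse(raw)[0].question); // → "What is React?"
```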
Interview runtime & feedback
- Questions are presented one at a time; user can play TTS for the question.
- The user records answers with the mic; speech is transcribed to text via `useSpeechToText`.
- When recording stops, the app sends the question + user answer to the AI (feedback prompt). The AI returns structured feedback + a rating (JSON).
- Each feedback entry is stored in `user_answers` (columns: `mock_id_reference`, `question`, `correct_answer`, `user_answer`, `feedback`, `rating`, `user_email`, `created_at`).
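The "only submit once recording has stopped and the answer is long enough" guard can be isolated into a pure predicate; the helper name and the 10-character threshold are illustrative assumptions.

```javascript
// Hypothetical helper: decide whether a transcribed answer is ready to send to the AI.
// Guards against firing on partial transcripts or while the mic is still recording.
function shouldSubmitAnswer(isRecording, userAnswer, minLength = 10) {
  return !isRecording && typeof userAnswer === "string" && userAnswer.trim().length >= minLength;
}

// Inside a React component this would gate the effect, roughly:
// useEffect(() => {
//   if (shouldSubmitAnswer(isRecording, userAnswer)) sendForFeedback(userAnswer);
// }, [isRecording, userAnswer]);

console.log(shouldSubmitAnswer(false, "Closures capture lexical scope.")); // → true
console.log(shouldSubmitAnswer(true, "Closures capture lexical scope."));  // → false
console.log(shouldSubmitAnswer(false, "hi"));                              // → false
```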
Database & ORM
- The Drizzle schema is defined (e.g., `utils/schema.js`).
- Use `npx drizzle-kit push` to migrate and `npx drizzle-kit studio` to view/inspect data.
- Neon is used as the serverless Postgres provider; the connection string is stored in environment variables.
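A sketch of what `utils/schema.js` might contain: the table and column names come from this summary, while the column types, `notNull` constraints, and the auto-increment `id` primary key are assumptions.

```javascript
// utils/schema.js — a sketch of the two tables described above, using Drizzle's
// Postgres core helpers. Types and constraints are assumptions, not the tutorial's code.
import { pgTable, serial, text, varchar } from "drizzle-orm/pg-core";

export const MockInterview = pgTable("mock_interview", {
  id: serial("id").primaryKey(),
  mockId: varchar("mock_id").notNull(),
  jsonMockResponse: text("json_mock_response").notNull(),
  jobPosition: varchar("job_position").notNull(),
  jobDescription: varchar("job_description").notNull(),
  jobExperience: varchar("job_experience").notNull(),
  createdBy: varchar("created_by").notNull(),
  createdAt: varchar("created_at"),
});

export const UserAnswer = pgTable("user_answers", {
  id: serial("id").primaryKey(),
  mockIdReference: varchar("mock_id_reference").notNull(),
  question: varchar("question").notNull(),
  correctAnswer: text("correct_answer"),
  userAnswer: text("user_answer"),
  feedback: text("feedback"),
  rating: varchar("rating"),
  userEmail: varchar("user_email"),
  createdAt: varchar("created_at"),
});
```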
UI & UX
- Responsive header with menu visibility toggles by screen size.
- Cards for interview entries; modal dialog (shadcn Dialog) for creating interviews.
- Collapsible items on the feedback page for per‑question details.
- Loading states and disabled buttons while generating/processing.
Deployment
- Push repository to GitHub, import into Vercel, set environment variables, and deploy.
- Vercel supports live preview and custom domains.
Developer notes / gotchas
- Use `npx create-next-app@latest` to scaffold the project.
- Ensure React version compatibility (some packages may require specific React versions; adjust `package.json` if necessary).
- Clerk middleware treats routes as public by default; explicitly configure protected-route patterns to secure pages.
- AI responses may include stray text before the JSON; sanitize the response before `JSON.parse`.
- When recording audio and updating state, be mindful of async state updates; use `useEffect` guards (e.g., ensure recording has stopped and the answer length exceeds a threshold before sending to the AI).
- Client-visible API keys must use the `NEXT_PUBLIC_` prefix (example: the Google Generative AI key in the tutorial).
- Add all necessary environment variables in Vercel prior to deployment.
Commands & toolbox (representative)
- Project setup and dev server:
  - `npx create-next-app@latest`
  - `npm run dev`
- Install packages:
  - `npm i @google/generative-ai`
  - `npm i react-webcam react-use-speech-to-text uuid moment`
  - `npm i drizzle-orm drizzle-kit` (plus provider packages)
- Drizzle:
  - `npx drizzle-kit push`
  - `npx drizzle-kit studio`
- Git:
  - `git init`
  - `git add .`
  - `git commit -m "Initial commit"`
  - `git push origin main`
- Deployment:
- Import GitHub repo into Vercel and set environment variables
Tutorial structure / guide sections
- Project setup: create Next.js app, folder structure, Tailwind setup
- UI setup: shadcn components, Tailwind + shadcn integration, component creation
- Authentication: Clerk sign-in, middleware, custom sign-in page, Clerk UIKit vs. Clerk Elements
- Header & dashboard layout, responsive design
- Add new interview dialog: form handling, client validation, state management
- Google Generative AI integration: API key, prompt design for JSON Q&A, SDK usage
- Backend DB setup: Neon (Postgres) + Drizzle ORM, schema, `drizzle.config.js`, migrations
- Persisting AI Q&A and returning `mock_id`
- Interview runtime page: dynamic Next.js routes, webcam, speech-to-text and TTS
- Sending user answers to AI for feedback; storing `user_answers` entries
- Feedback page: aggregate rating, collapsible per-question feedback
- Dashboard: list of interviews, retake, view feedback
- Stripe payments (upgrade plans) integration via payment links
- Deployment: GitHub repo → Vercel, environment variables, live site
Main speakers / sources
- Primary tutorial author: Tube Guruji (YouTube)
- Documentation and services referenced:
- Clerk (clerk.com)
- shadcn/ui docs
- Tailwind CSS docs
- Neon (Postgres)
- Drizzle ORM & drizzle‑kit docs
- Google AI Studio / Generative AI (Gemini) docs
- `react-webcam` docs
- `react-use-speech-to-text` docs
- Vercel docs
- GitHub, Stripe