Interaction Design, Prompting
Leveraging AI to help job seekers prepare for interviews
Overview
As someone currently preparing for job interviews, I noticed that I, like many others, find behavioral and situational interviews more anxiety-inducing than technical ones. This case study presents a passion project in which I designed a web interface that leverages LLMs to help people prepare for such interviews.
My Role
Designed and developed a web-based interface using React and the Llama 3 model, where users can practice behavioral and situational questions and get feedback on their answers.
Tools: Figma, React, v0.dev, TypeScript
Motivation
A quick survey of current senior and graduate students helped me understand their opinions about and frustrations with the interview process. I set out to answer the following questions:
How do they currently prepare for interviews, and what are their biggest challenges?
How do they currently evaluate their own interview performance?
"I really struggle with behavioral questions"
-Participant 1
"I have trouble with these (behavioral) types of interviews"
-Participant 2
"…I'm not sure I'm differentiating myself from a bad candidate"
-Participant 3
The Process
Designing the Interface
I started with exploratory sketches and defined the I/O flow for the interface. The focus was on designing a simple, intuitive, dedicated interface distinct from the conventional chatbot interface seen in platforms like ChatGPT, Perplexity, and Claude.


I initially experimented with Python in Google Colab to implement the core functionality, using Gradio to build the interface. This let me quickly test individual functions and stand up a working MVP prototype with minimal setup.
However, I encountered limitations with Gradio's API, which restricted my ability to fully customize the UI and interactions. Additionally, Google Colab's constraints, such as limited session duration, awkward dependency management, and lack of persistent storage, made it difficult to use beyond a quick initial prototype.
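The core of that early prototype boiled down to two plain Python functions that assemble prompts for the model, one to generate a question and one to request feedback on an answer. The sketch below is a simplified illustration of that idea; the function names and prompt wording are assumptions, not the exact code, and the Gradio UI and the actual Llama 3 call are omitted.

```python
def build_question_prompt(role: str) -> str:
    """Build a prompt asking the model for one short interview question."""
    return (
        f"You are an interviewer for a {role} position. "
        "Ask one short, realistic behavioral or situational interview question. "
        "Reply with the question only."
    )


def build_feedback_prompt(question: str, answer: str) -> str:
    """Build a prompt asking the model to critique the candidate's answer."""
    return (
        "You are an interview coach. Give concise, constructive feedback "
        "on the candidate's answer to the question below.\n\n"
        f"Question: {question}\n"
        f"Answer: {answer}"
    )
```

Keeping the prompt builders as pure functions made them easy to test in isolation before wiring them to the model and the Gradio front end.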

For the final implementation, I chose React for the front end because its component-based architecture made it easier to build a modular, maintainable interface. It also provided greater flexibility and control, allowing me to fine-tune the design and user interactions.

Iterations
Refining the interface involved optimizing two things:
The inputs required to generate relevant questions and feedback
The balance of human and AI involvement in the outputs, keeping them clear and concise
Optimizing the Inputs
A key aspect of the output questions was to make them as realistic and as close to actual interview questions as possible. Initially, generated questions were overly verbose, complex, and unrealistic, making them difficult for users to process. I simplified and optimized the prompts, instructing the LLM to generate shorter, more direct, and more realistic interview questions. My design decisions for the final outputs were guided by three main considerations:
Length: Questions were kept short enough not to overwhelm the user, but long enough to feel realistic and clear. Question length also varied to simulate a real interview.
Follow-ups: The prompt was tuned to either ask a follow-up based on the user's answer or move on to a new question. This ensured the LLM produced a variety of questions rather than diving deep into a single question or topic.
Tone of Voice: The questions used friendly, conversational language so that users did not feel discouraged while reading them.
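Taken together, these considerations can be encoded directly in the system prompt, with a simple random choice deciding between a follow-up and a fresh question. The sketch below is illustrative only; the exact wording, the function name, and the `follow_up_rate` value are assumptions rather than the production prompt.

```python
import random
from typing import Optional

# Encodes the length, variety, and tone-of-voice considerations.
SYSTEM_PROMPT = (
    "You are a friendly interviewer. Ask behavioral and situational questions "
    "that are short, direct, and realistic. Vary the length of your questions, "
    "and keep the tone warm and conversational."
)


def next_question_instruction(last_answer: Optional[str],
                              follow_up_rate: float = 0.4) -> str:
    """Decide whether the model should follow up on the last answer
    or switch to a new topic, so sessions don't fixate on one question."""
    if last_answer and random.random() < follow_up_rate:
        return f"Ask one brief follow-up question about this answer: {last_answer}"
    return "Ask one new question on a different topic than before."
```

Separating the per-turn instruction from the fixed system prompt made it easy to tune the follow-up frequency without rewording the rest of the prompt.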