
“Once again Saltmarch has knocked it out of the park with interesting speakers, engaging content and challenging ideas. No jetlag fog at all, which counts for how interesting the whole thing was."
Cybersecurity Lead, PwC
In a world where Artificial Intelligence (AI) is evolving rapidly, prompt engineering has emerged as a crucial skill in harnessing the power of large language models (LLMs). As companies venture into this new frontier, the demand for talented Prompt Engineers is skyrocketing.
This guide is designed for those aspiring to break into this exciting and rewarding field. It draws on insights from hiring managers at some of the leading AI companies and presents the most informative and impressive responses given by candidates during real interviews.
This edition of Saltmarch Media’s "Inside the Interview Room" series takes you on a deep dive into real interview questions and the standout responses candidates gave to them.
By the end of this guide, you'll not only be armed with the knowledge to ace your interviews but also gain a deep understanding of the expectations and responsibilities of a Prompt Engineer. This guide serves as your stepping stone into the world of AI, helping you build a strong foundation and make a lasting impression in your interviews.
Join us in this journey to explore, learn, and prepare for your success in the AI revolution!
As the interview kicks off, the interviewers lean in and ask, "We're eager to dive into the depths of your experience with Large Language Models. Do you recall a moment when you had to get hands-on, crafting a particularly intricate and innovative prompt architecture? Or maybe there was a time when you were deep into analyzing the behavior of an LLM in a systematic way? Do tell, we're all ears!"
An engineer passionately shares their experiences working on a challenging project. "I remember one project particularly well," the candidate begins, a gleam in their eye. "We had to design a system to provide tailored customer support responses using an LLM."
The project revolved around an online service provider inundated with customer queries ranging from technical issues to billing inquiries. "We faced a significant challenge creating prompts that could handle such a wide array," the candidate confesses. "We needed responses to be concise, accurate, and, above all, helpful."
[Spotlight] "The major challenge was designing prompts that effectively covered a wide range of possible queries while keeping the responses concise and accurate. By iteratively testing and refining the prompts, I was able to significantly improve the accuracy and usefulness of the model's responses."
Understanding the client's business was step one. They dived into the backlog of support tickets, reviewing common queries and their range of topics. Based on this analysis, a set of prompts was crafted to guide the LLM toward clear, precise, and accurate responses.
But the process was not without its hurdles. "We tested these prompts using a subset of historical support queries, and we realized that while the model performed well for simple queries, it struggled with more complex, multi-part questions," the engineer recalls. This obstacle was tackled by refining the prompts, experimenting with different levels of specificity and structure. "For instance, for complex queries, I tried breaking them down into sub-questions, which the model would address individually before consolidating the answers."
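The decompose-then-consolidate tactic the candidate describes can be sketched as follows. This is an illustrative outline, not the project's actual code: `call_llm` is a hypothetical placeholder for whatever completion API was in use.

```python
def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call the model's completion API.
    return f"<model answer to: {prompt!r}>"

def answer_complex_query(query: str) -> str:
    # Step 1: ask the model to split the query into standalone sub-questions.
    split_prompt = (
        "Break the following customer query into separate, self-contained "
        f"sub-questions, one per line:\n\n{query}"
    )
    sub_questions = [
        line.strip() for line in call_llm(split_prompt).splitlines() if line.strip()
    ]

    # Step 2: have the model address each sub-question individually.
    answers = [call_llm(f"Answer concisely and accurately:\n{q}") for q in sub_questions]

    # Step 3: consolidate the individual answers into one response.
    combined = "\n".join(answers)
    return call_llm(f"Combine these answers into a single helpful reply:\n{combined}")
```

The three-call structure trades latency for reliability: each sub-question is small enough for the model to answer well, and the final consolidation pass restores a single coherent reply.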
Post deployment, the LLM's performance improved significantly. "The model was able to handle a significant portion of the common, repetitive queries, allowing the human support team to focus on more complex and unique customer issues. This led to faster response times overall and improved customer satisfaction scores," the candidate beams, clearly proud of their accomplishment.
[Interviewer Notes] The candidate's systematic approach to understanding the problem, iterating on the design, and learning from the outcomes demonstrates strong problem-solving skills. They tackled a complex issue, made key improvements, and achieved a significant positive impact on the client's business. This answer exhibits a good balance of technical knowledge, practical application, and a deep understanding of the client's needs.
Indeed, it was a highly rewarding project, a testament to the practical benefits and potential challenges of using LLMs in a real-world setting. "Moreover," the candidate concludes, "the learnings from this project about how to handle complex queries were documented and shared with the wider team, improving our overall prompt design capabilities."
As the interview continues, one of the interviewers asks, "Imagine you're having a casual chat with someone who doesn't come from a tech-savvy background. They're intrigued about the work you do and ask you about these 'Large Language Models' you often talk about. How would you simplify the complexities of the architecture and functioning of these models so they could understand?"
"Think of an LLM like an author," they begin, painting a vivid picture. "An extremely well-read author that's taken in billions of pages of text. But this author is extremely literal and lacks the human capacity to understand the world."
[Spotlight] "Think of a large language model as an extremely well-read but highly literal author."
The candidate then gives an example. "Suppose you asked this 'author' who won the World Cup in 2023. The author can't answer as we would, because it doesn't learn new information after its last training run. Its knowledge ends in 2021, and anything after that is unknown to it. We've all encountered this with ChatGPT on certain prompts. But it can still generate a believable response based on patterns from its training data."
"And about its architecture," they continue, leaning forward, "It's composed of layers upon layers of artificial neurons. These are mathematical models inspired by how we believe neurons work in our brains. These layers work together to recognize patterns in the text it's read."
[Spotlight] "The architecture of an LLM is composed of layers upon layers of artificial neurons. These layers work together to recognize patterns in the text it has read."
The candidate then illustrates the concept further. "It's like recognizing a cat in a picture. You recognize the edges, then shapes like circles and triangles. More complex structures like a face or a tail come next, and finally, you realize it's a cat. That's how these layers in an LLM work to recognize patterns and generate text."
They finish their explanation with a sense of awe for their field. "LLMs are a fantastic intersection of linguistics, machine learning, and problem-solving. There's so much more to learn and explore!"
[Interviewer Notes] The candidate delivered an excellent non-technical explanation of large language models. Their analogy of an LLM as a literal, well-read author aptly captured the essence of LLMs, and their depiction of the architecture and operation of LLMs gave a clear, accessible insight into their complexity and sophistication. The candidate's enthusiasm for the field also shone through, making them an engaging communicator. Follow-up questions could explore the training process and how the model learns from patterns in its training data.
The interview takes a hypothetical turn when one of the panelists leans forward and says, "Picture this: your task for the day is to design a prompt that generates a summary of a company's financial report. A complex task, isn't it? Now, we're interested to know, what key factors would be on your checklist while tailoring this prompt?"
"Let's take a fictional company, TechMarch, and its Q2 2023 financial report as our example," the engineer proposes. "Assuming the report has all the usual elements, from revenue and net income to business operations and future plans, we have quite a job on our hands."
[Spotlight] "I would design a prompt that first asks the model to extract key financial figures and trends from the report. Then, I would have the model generate a summary that provides these details in a straightforward and easily understandable manner."
The candidate explains that the first step is understanding the content. "As a Prompt Engineer, I'd need to dive into the report, identifying the key financial figures and sections that need summarizing. In TechMarch's case, that might be revenue, net income, key expenditures, significant business events, and future strategies."
Next, they tell us about the initial prompt. "We'd want to guide the LLM to extract the necessary information and present it in a concise summary." They provide an example, which instructs the model to present key information from the financial report in a clear, straightforward manner.
[Spotlight] "Given the following report from TechMarch for Q2 2023, provide a summary that includes the company's revenue, net income, major expenditures, significant business events, and future plans. Please present this information in a clear, straightforward manner."
But the work doesn't stop there. The engineer emphasizes the importance of iterative refinement. "The prompt might need tweaks based on the LLM's output. For instance, if the model isn't providing clear explanations for major expenditures and business events, we might refine the prompt to specifically ask for these."
And finally, there's validation. "We'd need to test the refined prompt with the actual report to ensure it generates a clear, accurate, and useful summary. This process continues until we've got a prompt that consistently delivers high-quality summaries for various company reports."
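The prompt-then-validate loop the candidate outlines could look something like this minimal sketch. The section list and the completeness check are illustrative assumptions; a real validation pass would involve human review rather than keyword matching.

```python
# Sections the TechMarch Q2 2023 summary must cover, per the prompt above.
REQUIRED_SECTIONS = [
    "revenue", "net income", "major expenditures",
    "significant business events", "future plans",
]

def build_summary_prompt(report_text: str) -> str:
    # Assemble the summarization prompt around the report body.
    fields = ", ".join(REQUIRED_SECTIONS)
    return (
        "Given the following report from TechMarch for Q2 2023, provide a "
        f"summary that includes the company's {fields}. Please present this "
        "information in a clear, straightforward manner.\n\n"
        f"Report:\n{report_text}"
    )

def summary_is_complete(summary: str) -> bool:
    # Crude validation: check that every required section is at least mentioned.
    lowered = summary.lower()
    return all(section in lowered for section in REQUIRED_SECTIONS)
```

If `summary_is_complete` keeps failing on a particular section, that is the signal to refine the prompt, for example by explicitly asking for that section in its own sentence.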
[Interviewer Notes] Our candidate's systematic, four-step approach shows their methodical problem-solving skills. They emphasize the importance of understanding the content, crafting an initial prompt, refining it based on the LLM's output, and validating the final prompt. Their answer provides a clear, real-world example and demonstrates their ability to tackle complex tasks. They also highlight the iterative, test-driven nature of prompt design, indicating a strong understanding of the process.
As the interview progresses, one of the panelists interjects with a question, "When it comes to prompt engineering, testing and evaluation are as vital as crafting the prompt itself, wouldn't you agree? Can you tell us about the strategies you've previously employed to test and evaluate your prompts? It would be really enlightening if you could recall a specific episode where your meticulous testing brought to light a golden opportunity for improvement."
The engineer's eyes light up as they dive into a story from their past, detailing how they utilized A/B testing to improve prompt performance significantly.
"I've used a variety of methods to evaluate prompts, but one of my go-to techniques is A/B testing," they begin, "comparing different prompt strategies on the same task to see which is most effective."
[Spotlight] "In one instance, my testing revealed that a more explicit, detailed prompt significantly reduced model errors on a complex task, leading to an improved user experience."
They explain how they were tasked with using a large language model to help draft customer support responses for a software company. "The objective was to provide support staff with a high-quality draft response they could edit and personalize before sending it off to the customer."
The engineer details the two prompt strategies they developed: an Implicit Prompt, which simply asked the model to "draft a response," and an Explicit Prompt, which provided more specific guidance to the model.
They continue, "We conducted A/B testing by feeding both prompts the same set of customer queries and comparing the generated responses. Our evaluation criteria were accuracy, clarity, and tone."
[Spotlight] "While the Implicit Prompt often produced satisfactory responses, it wasn't always consistent in maintaining the desired structure and tone. The Explicit Prompt, on the other hand, consistently generated responses that met all our evaluation criteria."
The engineer then elaborates on a specific example: a customer query about difficulty installing a software update. They compare the responses generated by the Implicit and Explicit Prompts, illustrating how the Explicit Prompt provided a more structured, empathetic response that was favored by the support staff.
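The A/B test described above can be sketched as a small harness: run both prompt variants over the same queries and score each on the stated criteria of accuracy, clarity, and tone. The rubric function here is a hypothetical stand-in; in the candidate's account the scoring was done by support staff review.

```python
def score_response(response: str) -> dict:
    # Placeholder rubric: in practice, human reviewers or an automated
    # checklist would score accuracy, clarity, and tone from 0 to 1.
    return {"accuracy": 1.0, "clarity": 1.0, "tone": 1.0}

def ab_test(queries, prompt_a, prompt_b, generate):
    """Compare two prompt templates on the same queries.

    `generate(prompt)` is whatever function calls the model.
    Returns the average total score per variant.
    """
    totals = {"A": 0.0, "B": 0.0}
    for query in queries:
        for name, template in (("A", prompt_a), ("B", prompt_b)):
            response = generate(template.format(query=query))
            totals[name] += sum(score_response(response).values())
    return {name: total / len(queries) for name, total in totals.items()}
```

Feeding both variants identical queries is the key design choice: it isolates the prompt as the only changing variable, so any score gap is attributable to prompt wording alone.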
[Interviewer Notes] Our candidate's approach to testing and evaluating prompts demonstrates their strong analytic abilities. They clearly understand the importance of methodical testing and have experience with using A/B testing to compare the effectiveness of different prompt strategies. Their anecdote about the explicit, detailed prompt improving model performance reveals a keen understanding of how to refine prompts to enhance user experience. Their rigorous testing procedures show their commitment to delivering the best possible results.
"Based on our findings, we adopted the more explicit prompt strategy for this task," they conclude, a satisfied smile on their face. "This led to an improved user experience, as the support staff found the drafts more useful and less in need of editing."
Transitioning to the next topic, the interviewer leans in and asks, "You know, fostering a culture of knowledge sharing is very important to us here. Let's say you've just nailed down a successful approach to prompt engineering. How would you go about documenting and disseminating these best practices across our team? I'd love to hear about a time in your career when you did something similar."
"I strongly believe in making documentation clear and accessible," they begin, reflecting on their past experiences.
[Spotlight] "In the past, I've created a shared document outlining best practices for prompt engineering. This included examples and resources for further learning and became a go-to resource for my team, improving the quality of our work and efficiency."
They detail how the documentation should be structured to allow for easy navigation and emphasize the importance of using clear and straightforward language. "Avoid jargon when possible, and when technical terms are necessary, provide explanations or definitions," they recommend.
The engineer also emphasizes the value of examples to aid understanding, especially when dealing with complex or abstract concepts, and the importance of making the documentation accessible to all team members.
They then dive into an example from their own experience. "I created a document that listed all of the best practices I learnt from actual experience, such as the best approach to understanding the task at hand, crafting the prompt, testing and iterating, and evaluating the results." They also note that they included deep dives, case studies, and resources for further learning in the documentation.
[Spotlight] "Each section in the document would go into detail, providing clear explanations, tips, and examples. For instance, the 'Crafting the Prompt' section might offer explicit tips on how to craft effective prompts, specify the desired output format, and guide the model for complex tasks."
[Interviewer Notes] Our candidate understands the importance of clear and accessible documentation in facilitating team learning and improving efficiency. Their experience creating a resource for best practices in prompt engineering shows their initiative and dedication to improving team performance. The detailed approach they described, including the use of straightforward language, examples, and a logical structure, demonstrates their capacity for effective communication.
Switching gears a bit, the interviewer asks with a hint of curiosity, "The world of AI, especially areas like prompt engineering, is always evolving and advancing rapidly. We're curious to know, how do you manage to keep your finger on the pulse and stay current with the latest research and trends in this ever-changing field?"
"I regularly attend AI conferences and webinars, follow key AI researchers on Twitter, and read journals like 'AI and Society'," they disclose. The candidate's dedication to ongoing learning is evident.
[Spotlight] "I've recently been immersed in research on making large language models more explainable, and have been experimenting with these techniques in my projects."
They share how AI conferences and webinars, social media platforms, and AI journals all play a crucial role in keeping their knowledge current. Moreover, they express their intrigue with the recent focus on making AI models more explainable and interpretable, citing this as a critical step toward ensuring trust, fairness, and accountability in AI systems.
Their passion for the subject is palpable as they dive into their specific strategies for staying current in the realm of prompt engineering.
[Spotlight] "I delve into the latest advancements in large language models, engage with the AI community on various platforms, and read relevant research papers. Moreover, I find a lot of value in continual experimentation and interdisciplinary learning."
They emphasize how their immersion in related fields like linguistics, cognitive psychology, and communication studies has shaped their understanding of prompt engineering. Their previous experiences in prompt engineering training courses and workshops are also mentioned as instrumental in their learning journey.
[Interviewer Notes] The candidate clearly demonstrates their commitment to staying up-to-date with the latest research and trends in AI and prompt engineering. Their proactive approach to continuous learning, which involves attending conferences and webinars, engaging with the AI community, reading relevant research, and experimenting, demonstrates a high level of engagement and dedication to the field. Furthermore, their interest in the ethical implications of AI, particularly in explainability and interpretability, is commendable and aligns with our company's commitment to responsible AI practices.
Wrapping up, they note, "The nature of prompt engineering requires both keeping up with technical advancements in AI and a continual honing of one's intuition and language skills. Both aspects are crucial for creating prompts that can effectively guide an AI model's behavior."
As the conversation progresses, the interviewers lean forward, clearly interested in the candidate’s experience with client-facing roles. They ask, "We often collaborate closely with large enterprise clients on their prompting strategies here. We'd love to hear about a time from your past work when you had to build a strong relationship with a client, truly get under the skin of their needs, and craft a custom solution for them. Could you share that story with us?"
The candidate dives into a story of how they worked hand-in-hand with a large e-commerce company to enhance their customer service chatbot. They paint a picture of the challenge: the need for the bot to handle more complex queries, improve response accuracy, and offer a more engaging, human-like conversational experience to the customers.
[Spotlight] "In the initial stages, we had multiple meetings with the client's customer service and technical team. We also analyzed their chat logs to understand the common types of interactions, the bot's current performance, and areas for improvement."
The candidate reveals how they developed an initial set of prompts for various customer query categories and employed an iterative approach for refining the prompts based on the bot's responses. They also highlighted the importance of client feedback in ensuring the bot's responses fit the brand voice and customer service philosophy.
[Spotlight] "Once we reached a level of performance that the client was happy with in a simulated environment, we moved to a limited live pilot. The feedback from real users was invaluable."
The story concludes on a high note with the enhanced chatbot being able to handle a significantly larger volume of queries accurately, leading to improved customer satisfaction scores.
[Interviewer Notes] The candidate clearly demonstrates their ability to work closely with large enterprise customers, understand their needs, and design tailored solutions. Their approach, which involved a mix of client engagement, data analysis, iterative design, and responsiveness to feedback, led to a successful outcome. They also show a strong understanding of the specific challenges involved in designing chatbot prompting strategies and the importance of fitting the bot's responses to the brand voice. Overall, their experience aligns well with the needs of the role.
Finally, they stress, "This project reinforced the importance of understanding the client's specific needs, effective communication, and being responsive to feedback and changing requirements."
The interviewers segue into the next question. They say, "Now, let's ponder over the bigger picture for a moment. The work we do here in prompt engineering has far-reaching societal implications. We're interested in knowing how you've kept safety and beneficial impact at the forefront in your previous work. Could you share how you've navigated these considerations?"
The candidate delves into the societal impacts of AI, emphasizing the dual-edged nature of AI technologies and the responsibility to balance potential risks with beneficial outcomes.
[Spotlight] "The societal impact of AI, including issues like bias, misinformation, and unequal access, is a key consideration in my work."
They share an instance where they encountered the potential for bias in an AI tool for reviewing resumes, underscoring how bias can seep in when certain demographic groups are underrepresented in training data. They reveal the steps they took to anonymize personal information and ensure the AI's performance was fair across different demographic groups.
[Spotlight] "To avoid potential biases, we incorporated anonymization processes that stripped off personally identifiable information before the AI analyzed the resumes."
The candidate switches gears to the risk of AI systems generating misinformation, sharing a compelling example from their work on a health advice chatbot. They explain how they collaborated with healthcare professionals to ensure accuracy and set up feedback loops for continuous improvement.
[Spotlight] "We collaborated with healthcare professionals to ensure the advice provided by the bot was accurate. We also implemented feedback loops where users could report incorrect information."
They also address the issue of unequal access to AI technologies, highlighting their commitment to developing lightweight AI applications that can run on various devices and advocating for multilingual models.
[Spotlight] "I advocate for developing multilingual models and applications to ensure non-English speakers can also benefit from these technologies."
[Interviewer Notes] The candidate has an excellent understanding of the societal impact of AI, and has shown clear examples of how they have incorporated considerations around bias, misinformation, and access in their work. Their commitment to fairness, reliability, and accessibility in AI is clearly evident. They seem well-prepared to navigate the ethical and societal challenges associated with AI, which is highly relevant to our work in prompt engineering.
The candidate concludes by reaffirming their commitment to the societal impact of AI, stating, "Through thoughtful design, rigorous testing, and continuous learning and iteration, I strive to develop AI systems that are fair, reliable, and accessible to as many people as possible."
Moving on from the societal impact of AI, the interviewers are keen to understand the candidate’s independence and learning capabilities. "We've all had those moments when we're thrown in the deep end with a new technology or project, haven't we? Can you share an instance when you had to quickly get up to speed all by yourself, with little to no guidance? Also, we'd love to understand your philosophy towards learning and growth in your professional life. Do tell us about it!"
The candidate recalls an instance from a few years ago when OpenAI launched GPT-3, a significant upgrade from GPT-2. They were working as a prompt engineer on a chatbot project at that time, and felt the need to quickly understand the new capabilities of GPT-3 to see how their project could benefit from it.
[Spotlight] "As a prompt engineer working on a chatbot project at that time, it was imperative for me to quickly understand the new capabilities and potential applications of GPT-3 to see how we could benefit from it."
They dove headfirst into the research paper OpenAI published about GPT-3, gaining an understanding of the model's architecture, training methodology, and improvements over GPT-2. They supplemented this by following online discussions, enriching their theoretical understanding with practical insights, gotchas, and innovative uses of the model.
[Spotlight] "I started by diving into the research paper published by OpenAI about GPT-3. I also followed discussions on online forums like Reddit, GitHub, and Stack Overflow."
Believing firmly in the value of hands-on learning, they began experimenting with the GPT-3 model through the OpenAI API, starting with basic examples and moving on to more complex prompts relevant to their chatbot project. This allowed them to build a hands-on intuition about the workings of GPT-3.
[Spotlight] "I accessed the GPT-3 model through the OpenAI API and started experimenting with it... I played with various prompt designs and gradually built an intuition about what works well and what doesn't with GPT-3."
Through this process, they managed to significantly improve their chatbot's performance, utilizing GPT-3's increased context window and nuanced understanding of prompts to handle more complex user queries.
[Spotlight] "I discovered that GPT-3's increased context window and improved understanding of nuanced prompts allowed us to handle more complex user queries... I redesigned our prompts accordingly, which led to improved user satisfaction with our chatbot."
[Interviewer Notes] The candidate shows a clear methodical approach to learning and adapting to new technology. They demonstrated the ability to independently learn about GPT-3 and apply their understanding to improve their project significantly. They show a commitment to continuous learning and sharing their knowledge, reflecting a growth mindset that would be beneficial in our fast-paced work environment.
Reinforcing the idea of learning as a continuous, communal process, the candidate shares that they continue to stay updated on new developments related to GPT-3 and share their learnings with the wider community.
[Spotlight] "I believe that learning is a continuous, communal process, and staying current in our field requires both learning from others and sharing our own knowledge."
They wrap up with the assurance of bringing this growth mindset to your team and contributing to the continuous innovation required in the fast-paced field of AI.
The interviewers move on to team dynamics. "Team dynamics can be quite a rollercoaster ride sometimes, right? We've all faced tricky situations or conflicts within our teams. Could you walk us through a scenario when you came across a substantial challenge or disagreement in your team? We're really interested in hearing about your approach to the issue and the final result. Don't hold back on the details!"
The candidate recalls an instance from a previous job where they were working on an AI-powered assistant for an insurance company. The complexity of the task led to a divide within their team, with one group advocating for a simple prompt design, and another favoring more explicit prompts.
[Spotlight] "At my previous job, we were working on an AI-powered assistant for an insurance company. A divide emerged within our team, with one group advocating for a simple prompt design, and another favoring more explicit prompts."
As a senior member of the team, the candidate stepped in to facilitate a resolution. They arranged a meeting, ensuring open communication among all team members. They also proposed a problem-solving approach, suggesting that they run experiments to test both strategies and let the results guide their decision.
The candidate then coordinated with the team to execute the plan, testing variations of both simple and explicit prompts. The results showed that a hybrid approach worked best.
[Spotlight] "I coordinated with the team to design and run the experiments... The results showed that a hybrid approach worked best: simpler prompts worked well for straightforward queries, while more complex queries benefited from explicit prompts."
[Interviewer Notes] The candidate demonstrated strong leadership and problem-solving skills in navigating a complex situation. Their ability to facilitate open communication, encourage collaborative problem-solving, and guide the team towards a data-driven decision is impressive. They also show the ability to learn from the experience and improve team processes.
After sharing the results with the team and reaching consensus on the hybrid approach, they learned the importance of testing and data-driven decision-making, which became a standard part of their work.
The candidate wraps up by reaffirming their belief in open communication, collaborative problem-solving, and data-driven decision making, promising to bring these values to the interviewer company’s team and foster a collaborative, innovative, and high-performing culture.
[Spotlight] "This experience reinforced my belief in the importance of open communication, collaborative problem-solving, and data-driven decision making. I will bring these values to your team and foster a collaborative, innovative, and high-performing culture."
Banner Image Credits: Steve Scott at Great International Developer Summit