Prompt Engineering vs RAG for Editing Resumes


Despite my accomplishments and qualifications, I'm seeing a lower yield of job applications converting to interviews, especially over the past 12 months or so. Like many others, I have considered using Large Language Models (LLMs) to help with resume creation and editing. Ideally, you create a new resume for each job you apply for, tailoring how you phrase your work experience to match the wording and particulars of the specific posting and company. If you are at least mid-career, you likely have more work experience than can fit on a resume and need to decide what to leave out. LLMs can help summarize, rephrase, and select the most relevant work experience to tailor a resume to a specific job posting.

In this article, we will use prompt engineering and Retrieval-Augmented Generation (RAG) in Azure to supplement LLMs in writing a resume. LLMs can help write resumes without RAG, but adding RAG lets us test whether it produces better resumes. We will also compare a full LLM to a smaller language model. To compare the different cases, we use the following metrics (per Microsoft):

Groundedness: Groundedness evaluates how well the model's answers align with information from the input source. LLMs should provide responses that are based on the provided data. Any response that goes outside the provided context is undesirable when writing a resume; we don't want the LLM to make up work accomplishments!

Relevance: Relevance measures how pertinent the model's responses are to the given questions. In our case, the LLM should provide resume content relevant to the given job description.

Coherence: Coherence evaluates whether the provided language is clear, concise, and appropriate. This is especially important in resumes, where brevity and clarity are key.

Fluency: Fluency measures how well the LLM adheres to the rules of proper English. Resume content must have proper grammar and spelling.

We'll evaluate resume generation across three cases: 1) prompt engineering only, 2) a RAG resume, and 3) a RAG resume on a different base model. Evaluation will be qualitative according to the metrics above, with each metric scored, from lowest to highest, as unacceptable, marginal, or acceptable.
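In this article the metrics are scored qualitatively by hand. Purely as a sketch of how the same Microsoft-defined metrics could be computed programmatically, the snippet below uses the azure-ai-evaluation Python package; the endpoint, key, deployment, and example strings are placeholders, and the evaluator argument names vary between SDK versions, so check the current docs before relying on it.

```python
# Sketch only: scoring generated resume content on the four metrics with
# azure-ai-evaluation. All credentials and example strings are placeholders,
# and argument names may differ between SDK versions.
from azure.ai.evaluation import (
    GroundednessEvaluator,
    RelevanceEvaluator,
    CoherenceEvaluator,
    FluencyEvaluator,
)

model_config = {
    "azure_endpoint": "https://<your-resource>.openai.azure.com/",
    "api_key": "<your-key>",
    "azure_deployment": "gpt-4o",  # model used as the judge
}

job_posting = "Senior data analyst role requiring SQL, Python, and stakeholder reporting."
master_resume = "Ten years as a data analyst; built SQL reporting pipelines; led a team of three."
generated_bullet = "Built automated SQL reporting pipelines that cut weekly reporting time."

scores = {
    "groundedness": GroundednessEvaluator(model_config)(
        context=master_resume, response=generated_bullet
    ),
    "relevance": RelevanceEvaluator(model_config)(
        query=job_posting, response=generated_bullet
    ),
    "coherence": CoherenceEvaluator(model_config)(
        query=job_posting, response=generated_bullet
    ),
    "fluency": FluencyEvaluator(model_config)(response=generated_bullet),
}
print(scores)  # each evaluator returns a dict containing a 1-5 score for its metric
```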

To have an LLM write a resume, we must provide our experience. This is most easily done with prompt engineering. Prompt engineering is a way of guiding LLMs to give more helpful answers. LLMs are trained on very broad data sets (like the web) to give them as much insight into human language and patterns as possible. However, this means they need context to give specific (and helpful) responses. Prompt engineering techniques are ways of interacting with LLMs to improve their responses.

To use prompt engineering to help write a resume, we provide the context (that the LLM is going to help us with a resume) and supply it with work experience so it has data to draw from. Next, we provide the job posting and guide it through writing a resume. However, with a long job history we may run into token limits, requiring either editing down the experience or increasing the token limits (and thus the cost) of the LLM and interface.
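Although this experiment is done entirely in the Azure playground, the same prompt-engineering flow can be expressed in a few API calls. The sketch below uses the openai Python SDK against an Azure OpenAI deployment; the endpoint, key, file names, and prompt wording are placeholders, not the actual prompts used later in this article.

```python
# Minimal sketch of the playground flow as code: a system message, the work
# experience, the job posting, and a request for tailored content.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-key>",
    api_version="2024-06-01",
)

messages = [
    {"role": "system", "content": "You are an expert resume writer. Use only the accomplishments I provide."},
    {"role": "user", "content": "Here is my master resume:\n" + open("master_resume.txt").read()},
    {"role": "user", "content": "Here is the job posting I am targeting:\n" + open("job_posting.txt").read()},
    {"role": "user", "content": "Write a 3-4 sentence professional summary tailored to this posting."},
]

response = client.chat.completions.create(
    model="gpt-4o",   # the Azure deployment name
    messages=messages,
    temperature=0.3,  # keep it factual; note that a long work history can hit token limits
)
print(response.choices[0].message.content)
```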

We'll use Azure to conduct this experiment code-free. We start with prompt engineering using the gpt-4o foundation model. We'll follow the Azure tutorial for creating a RAG-based app. To begin, follow all steps of the "Create Foundry hub" and "Deploy Models" sections of the tutorial. If you are only using prompt engineering, skip the next few sections, go to "Test the index in the playground," and complete the first two steps to deploy a gpt-4o foundation model.

I'll give as much detail on methods as possible, but out of privacy and professionalism concerns I won't share my own resume/work experience or the specific job posting I used.

We'll mostly use the same prompts for each case. The prompt engineering case adds a step where we provide a master resume for the LLM to use as reference material. The prompts are adapted from a LinkedIn article on using prompts to write a resume with LLMs. To begin, we provide a system message in the "Give the model instructions and context" box in the Azure playground. The system message is:

.

The system message gives the LLM its basic task (write resumes) along with general guidelines around groundedness (don't make up accomplishments), relevance (position…to my target job posting), and coherence (avoid redundancy and cliché terms).
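The exact system message is not reproduced here. Purely as an illustration, a hypothetical message following the guidelines above might look something like the following (my wording, not the prompt actually used):

```python
# Hypothetical system message reflecting the guidelines described above;
# not the actual prompt used in this article.
system_message = (
    "You are an expert resume writer helping me tailor my resume to a target job posting. "
    "Use only the work experience and accomplishments I provide; never invent certifications, "
    "metrics, or achievements. Position my experience toward the target job posting, and keep "
    "the language concise, avoiding redundancy and cliché terms."
)
```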

Now we provide a master resume. The prompt I used is: "." I then provided my master resume. I won't use this prompt when using RAG.

Next, we give the LLM more context on the hiring company: I followed it with the company information from the job posting; job postings often begin with a paragraph or two about the company. Then I provided the job posting itself, set up with the following prompt:

To help target the LLM and provide more context for resume bullets, I next asked "" and then "". The goal of these questions is to increase the relevance of the bullets and summaries it provides.

Now it's time to start generating resume content. I had already chosen a rough format for the resume: begin with a paragraph summary, then provide 3-5 bullets for my two most recent jobs, and then 1-3 bullets for the others. I conclude with an education section and a summary of key relevant skills. The LLM will provide everything except the education section.

First, I asked it to provide a summary:

Now I ask it to provide bullet points for each of my jobs:

 

I repeat this prompt for each job, adding a clause for the specific job and changing the number of bullet points requested as described previously.
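Continuing the earlier code sketch, this per-job repetition could be written as a simple loop; the job titles, bullet counts, and prompt wording below are illustrative placeholders, and `client` and `messages` are assumed from the earlier snippet.

```python
# Sketch of repeating the bullet-point prompt for each job, varying the bullet count.
jobs = [
    ("Senior Data Analyst, Contoso", 5),
    ("Data Analyst, Fabrikam", 4),
    ("Research Assistant, State University", 2),
]

for title, n_bullets in jobs:
    messages.append({
        "role": "user",
        "content": f"Write {n_bullets} concise resume bullet points for my role as {title}, "
                   "tailored to the job posting above.",
    })
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    bullets = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": bullets})  # keep the conversation history
    print(f"--- {title} ---\n{bullets}\n")
```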

Finally, once I have bullets for each job, I ask the LLM to provide a summary of relevant technical skills:

The responses to these prompts provide a starting point for a resume that should require minimal editing, mostly for formatting, removing content that may be inaccurate, and ensuring the resume fits on one page. That concludes the prompt engineering case.

The next step beyond prompt engineering is RAG. RAG allows users to create their own libraries to serve as a knowledge base for LLMs to draw from. In this case, the document library consists of previously created resumes. Older resumes provide more detail on early-career accomplishments; for more recent job experience, this approach is useful once you have already created a handful of resumes covering the spectrum of your work experience. Building a RAG index from your resumes helps focus the LLM on your own experience base without needing a custom-trained or fine-tuned model. RAG isn't necessary for using an LLM to write a resume and it does incur computational cost, but it could improve results compared to prompt engineering alone and makes it easier to give the LLM a larger body of experience to draw from.
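Conceptually, RAG just means retrieving the most relevant chunks of your own documents and placing them in the prompt so the model grounds its answer in them. The Azure setup used here relies on embeddings and an Azure AI Search index, but the core idea can be shown in a small, framework-free sketch using TF-IDF similarity over past resume bullets (file names are placeholders):

```python
# Minimal RAG sketch: retrieve the past-resume snippets most similar to the job
# posting, then build a grounded prompt from them. Uses TF-IDF for brevity; the
# Azure setup in this article uses embeddings and an Azure AI Search index instead.
from pathlib import Path
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Split each past resume into bullet-sized chunks (one line per bullet).
chunks = []
for path in Path("old_resumes").glob("*.txt"):
    chunks.extend(line.strip() for line in path.read_text().splitlines() if line.strip())

job_posting = Path("job_posting.txt").read_text()

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(chunks + [job_posting])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Keep the ten most relevant past bullets as grounding context.
top_chunks = [chunks[i] for i in scores.argsort()[::-1][:10]]
context = "\n".join(top_chunks)

prompt = (
    "Using ONLY the experience below, write resume bullets tailored to the job posting.\n\n"
    f"Experience:\n{context}\n\nJob posting:\n{job_posting}"
)
```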

We'll use the same prompts for the RAG cases, except we remove the first prompt providing work background, since the RAG index will provide that. To use RAG, we return to the Azure tutorial, this time completing the "Add data to your project" and "Create an index for your data" sections. However, instead of using the data provided in the tutorial, create and upload a folder with all the resumes you want the LLM to draw from. Once the indexing is complete, follow step 4 of "Test the index in the playground" to add the data to the model's context. After that, we repeat the prompts used earlier, minus the first prompt providing work history.
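Under the hood, the playground's "add your data" step attaches the search index to each chat request. Roughly, the equivalent API call uses the data_sources extension of the Azure OpenAI chat completions API, as sketched below; the endpoint, key, and index name are placeholders, the exact field names vary by API version, and `client` and `messages` are assumed from the earlier sketch (minus the master-resume prompt), so treat this as an outline rather than a recipe.

```python
# Sketch of the same chat request with the resume index attached, mirroring what
# the playground does when data is added. Field names vary by API version.
rag_extra_body = {
    "data_sources": [{
        "type": "azure_search",
        "parameters": {
            "endpoint": "https://<your-search-service>.search.windows.net",
            "index_name": "resume-index",
            "authentication": {"type": "api_key", "key": "<search-key>"},
        },
    }]
}

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,      # same prompts as before, minus the master-resume prompt
    extra_body=rag_extra_body,
)
print(response.choices[0].message.content)
```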

Finally, to evaluate resume generation with a different foundation model, we deploy a new model to the project, this time gpt-4o-mini, and evaluate its performance with RAG. LLMs have trillions of parameters, requiring enterprise-level hosting. Small(er) language models (a reported 8 billion parameters for gpt-4o-mini vs. 1.8 trillion for gpt-4o) attempt to provide much of the capability of LLMs in a more compact and flexible form factor that supports localized deployment, which is especially important for the data security and privacy of smaller firms that may not be able to support internal hosting of an LLM. Once the new model is deployed, we return to the playground, add the system message and the RAG data, and repeat the same prompts as before.
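In the code sketches above, the only change needed for this case would be the deployment name passed to the call, for example:

```python
# Run the same RAG-grounded prompts against both deployments for comparison.
# Assumes client, messages, and rag_extra_body from the previous sketches.
for deployment in ("gpt-4o", "gpt-4o-mini"):
    response = client.chat.completions.create(
        model=deployment,
        messages=messages,
        extra_body=rag_extra_body,
    )
    print(f"=== {deployment} ===\n{response.choices[0].message.content}\n")
```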

The table below summarizes the performance of each case:

| Case | Groundedness | Relevance | Coherence | Fluency |
| --- | --- | --- | --- | --- |
| Prompt Engineering | Unacceptable | Marginal | Acceptable | Acceptable |
| RAG | Acceptable | Marginal | Acceptable | Acceptable |
| RAG-mini | Acceptable | Marginal | Acceptable | Acceptable |

Summary of case performance across metrics

The prompt engineering resume had substantial grounding issues, to the point that I would not want to use it at all. It invented certifications I don't have and dollar amounts for improvements I didn't make. The two RAG resumes were better, but still had some issues. gpt-4o was slightly more grounded, but still made some mistakes the mini model didn't. Given the known problems with LLM hallucinations, we should expect to verify every statement. All three models were marginal on relevance: they didn't include several important phrases from the job listing. They were able to phrase acceptable bullets, but those bullets could be improved by manual editing. The RAG resumes, especially on the full model, were slightly more relevant. All models were acceptable for coherence, though the RAG bullets were more concise. All models provided acceptable written English. If you have a large portfolio of resumes, it may be worth using RAG when you want to generate new ones, if only to reduce the likelihood of hallucinations (that is, assuming you want to be honest; some of the accomplishments the LLM attributed to me were quite impressive!).

Some final thoughts on using LLMs to create resumes. The LLMs provided a good starting point, especially if you are finding it hard to come up with new ideas or phrasing, or need a fresh start on a resume. It is generally easier to edit a first-draft resume than to create a new one, so they can help job applicants in crafting resumes. However, I needed existing resume bullets and job experience on hand for the LLM to draw from, which means I still need to know how to write resume bullets. Writing these bullets is a perishable skill, so I recommend you not rely on LLMs to write all your resumes, especially as you gain new work experience. Second, I needed to further trim the bullets and decide which of those the LLM provided to keep; I could have avoided this by asking the LLM to write an entire one-page resume instead of proceeding step by step, but that may have decreased the quality (especially the relevance) of the responses. Finally, I could have improved the responses by using live interaction to help the LLM edit and refine its answers. However, I wanted to keep conditions as controlled as possible to allow comparison across the cases.

Using LLMs may be useful in the resume AI arms race to reduce the time and effort spent on each individual resume, but remember to keep your skills sharp; the more you let something else do your thinking for you, the less capable you will be. Use LLMs to help edit and find new phrasing ideas for resumes, not to flood job postings with even more applications. Most important, build human connections; a network and connections at a company are the best way to have your resume reviewed by the human eyes of a hiring manager rather than screened out by an HR bot.
