How to Use LLMs to Generate Concise Summaries

Google Cloud - Community
Photo by Aaron Burden on Unsplash

Large language models (LLMs) are a type of artificial intelligence (AI) that can be used to generate text. They are trained on massive datasets of text, which allows them to learn the nuances of human language and generate text that is both accurate and natural-sounding.

In recent years, LLMs have been used to generate summaries of text. This is a very useful application, as it can help people quickly and easily understand the key points of a long piece of text.

In this blog post, we will discuss how to use LLMs to generate concise summaries of text. We will use the Vertex AI SDK to access the LLM models from Google Cloud, and Python will be the programming language of choice.

If you are not technical or not familiar with Python, please check out my other blog post on getting started through the user interface. It is as simple as typing a few sentences and clicking a button!

Setting up the environment

To get started, you need a Google Cloud project. You can create a new project by following these instructions, or simply use an existing one.

Once the project is created, you can use either Vertex AI Workbench or Google Colab to get started. Using your own environment is also an option, but you will have to set up the authentication part yourself.
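
If you do go with your own environment, a minimal sketch of that setup (assuming the gcloud CLI is installed) is to create Application Default Credentials, which the Vertex AI SDK picks up automatically:

gcloud auth application-default login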

Don’t forget to enable the required Vertex AI APIs!
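
If you prefer the command line, the API can be enabled with gcloud as well (again assuming the gcloud CLI is installed and configured for your project):

gcloud services enable aiplatform.googleapis.com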

In a Jupyter Notebook environment, install the required library:

! pip install google-cloud-aiplatform --upgrade --user

If you are using Google Colab, you need an extra step to authenticate yourself. It can easily be done by running these two blocks of code.

from google.colab import auth

# select the Google account associated with the cloud project
auth.authenticate_user()

from google.cloud import aiplatform

# point the SDK at your project ID and region
aiplatform.init(
    project="your-project-id",
    location="your-project-location",
)

Now we just need to import the model and initialize it. The following code does that.

from vertexai.preview.language_models import TextGenerationModel

text_generation_model = TextGenerationModel.from_pretrained("text-bison@001")

If you can successfully run the code above, you are good to proceed with the rest of this tutorial.

Summarizing a paragraph

In order to summarize a paragraph using an LLM, we need to cover a few basic concepts about LLMs. An LLM takes in something called a prompt, an input text (e.g. a question), and produces a response (output text) based on the structure of the prompt.

To check whether the text generation model is working as intended, try running this code.

prompt = "What is prompt design?"
answer = text_generation_model.predict(prompt, max_output_tokens=1024).text

print(answer)

You should receive a response from the model that answers your question about prompt design. Now, let's try to summarize that answer.

To summarize a paragraph, you just need to add "Summarize this text: " as a prefix.

prompt = "Summarize this text: " + answer
summary = text_generation_model.predict(prompt, max_output_tokens=1024).text

print(summary)
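
As a side note, predict accepts a few more parameters than max_output_tokens. A minimal sketch (the value below is illustrative, not a recommendation): lowering temperature makes the output less random, which often suits summarization.

summary = text_generation_model.predict(
    prompt,
    max_output_tokens=1024,
    temperature=0.2,  # 0.0 to 1.0; lower values give more deterministic summaries
).text

print(summary)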

How is the summary? If you are not happy with it, you can try editing the prefix. Let's try "Provide a concise summary of this text: " instead.

prompt = "Provide a concise summary of this text: " + answer
summary = text_generation_model.predict(prompt, max_output_tokens=1024).text

print(summary)

Noticed any changes? You can also try asking the model to provide the summary in bullet-point format. Just append "Provide the summary in bullet points" to the prompt.

prompt = "Provide a concise summary of this text: " + answer + " Provide the summary in bullet points"
summary = text_generation_model.predict(prompt, max_output_tokens=1024).text

print(summary)

Take a look at the prompt variable: it is getting messier as you add more information to it. This can cause undesired consequences, such as the model mixing up the instructions and the input. To prevent that from happening, we can make use of a prompt template.
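
You can see the problem for yourself:

# the input text and both instructions run together,
# with nothing marking where one ends and the next begins
print(prompt)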

A prompt template is a pre-defined structure that you can use to provide instructions to the LLM. Let's create one and use it right away!

prompt_template = """
Provide a concise summary of the triple backquoted text.

```{text}```

Provide the summary in bullet points.
"""

prompt = prompt_template.format(text=answer)
summary = text_generation_model.predict(prompt, max_output_tokens=1024).text

print(summary)

So far, so good? We can reuse the same template to summarize other paragraphs too; a small helper function (sketched below) makes that convenient.
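
For instance, wrapping the template in a helper makes it easy to reuse (a minimal sketch; the summarize function is my own naming, not part of the SDK):

def summarize(text: str) -> str:
    # fill the template with the text to be summarized
    prompt = prompt_template.format(text=text)
    return text_generation_model.predict(prompt, max_output_tokens=1024).text

print(summarize(answer))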

That said, if you try to summarize a large document, you might hit the token limit imposed by the model. To summarize a large document, we need to use LLMs creatively. Well, that calls for a sequel blog post.

If you want to learn more about text summarization using LLMs on Google Cloud, visit our GitHub repository for a comprehensive list of resources.
