Introducing txtchat, next-generation conversational search and workflows for all


Talk with your data and see what you learn

Before covering how txtchat works, let's cover the why. Why build this? Why open-source?

The following videos demonstrate how txtchat works. These videos run a series of queries with the Wikitalk persona. Wikitalk is a combination of a Wikipedia embeddings index and an LLM prompt to answer questions.

History

Conversation with Wikitalk about history.

Culture

Arts and culture questions.

Science

Let’s quiz Wikitalk on science.

Summary

Not all workflows need an LLM. There are many great small models available to perform a specific task. The summary persona simply reads the input URL and summarizes the text.
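As an illustration, a summary persona could be defined with a workflow like the sketch below. The model name and task layout are assumptions based on txtai's summary pipeline and URL task, not the exact definition from the txtchat-personas repository.

```yaml
# Hypothetical sketch of a summary persona workflow
summary:
  path: sshleifer/distilbart-cnn-12-6
workflow:
  search:
    tasks:
      # Read the input URL and extract the page text
      - action: textractor
        task: url
      # Summarize the extracted text
      - action: summary
```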

Mr. French

Like the summary persona, Mr. French is a simple persona that translates input text to French.
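A translation persona can be sketched in the same style. The layout below is an assumption based on txtai's translation pipeline, not the exact definition from the txtchat-personas repository.

```yaml
# Hypothetical sketch of a French translation persona
translation:
workflow:
  search:
    tasks:
      # Translate input text to French
      - action: translation
        args:
          - fr
```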

The workflow definitions for the examples above can be found in the txtchat-personas model repository.

The following index.yml configuration builds a Hacker News embeddings index.

path: /tmp/hn
embeddings:
  path: sentence-transformers/all-MiniLM-L6-v2
  content: true
tabular:
  idcolumn: url
  textcolumns:
    - title
workflow:
  index:
    tasks:
      - batch: false
        extract:
          - hits
        method: get
        params:
          tags: null
        task: service
        url: https://hn.algolia.com/api/v1/search?hitsPerPage=50
      - action: tabular
      - action: index
writable: true
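Conceptually, the tabular step maps each API row to an (id, text, tags) tuple using the idcolumn and textcolumns settings. The snippet below is a minimal, self-contained sketch of that mapping for illustration only; it is not txtai's implementation.

```python
# Illustrative sketch: map API rows to (id, text, tags) tuples the way the
# tabular task in the config above is set up (idcolumn=url, textcolumns=[title]).
def to_documents(rows, idcolumn="url", textcolumns=("title",)):
    for row in rows:
        # Join the configured text columns into a single text field
        text = " ".join(str(row[c]) for c in textcolumns if c in row)
        yield (row[idcolumn], text, None)

rows = [{"url": "https://example.com/story", "title": "Show HN: txtchat", "points": 42}]
print(list(to_documents(rows)))
# → [('https://example.com/story', 'Show HN: txtchat', None)]
```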
from txtai.app import Application

app = Application("index.yml")
list(app.workflow("index", ["front_page"]))

query.yml defines the search workflow for the agent.

path: /tmp/hn
writable: false

extractor:
  path: google/flan-t5-xl

workflow:
  search:
    tasks:
      - txtchat.prompt.Query
      - extractor
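The txtchat.prompt.Query task builds an LLM prompt from the user's question before the extractor runs. The function below is a hypothetical illustration of that idea; the instruction wording is an assumption, not txtchat's actual prompt text.

```python
# Hypothetical prompt-builder in the spirit of txtchat.prompt.Query;
# the template wording is an assumption for illustration.
def build_prompt(question):
    return (
        "Answer the following question using only the context below.\n"
        f"Question: {question}\n"
        "Context:"
    )

print(build_prompt("What is the top story on Hacker News?"))
```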

python -m txtchat.agent query.yml
Query result for the Hacker News embeddings index.
