pgml chat with history + additional functionality #1047


Merged: 14 commits into master on Oct 24, 2023

Conversation

santiatpml
Contributor

  • Chat history in a separate collection
  • Keeps track of users, conversations, and interfaces
  • Prompt engineering on system and base prompts to follow specific instructions
  • Options for language, personality, programming language etc.
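As a rough sketch of what a chat-history document in the separate collection might look like (the field names here are assumptions for illustration, not the PR's actual schema), each message can carry its role, user, conversation, interface, and timestamp as filterable metadata:

```python
from datetime import datetime, timezone
from uuid import uuid4

def make_history_document(role, content, user_id, conversation_id, interface="cli"):
    # Build one chat-history record: a unique id, the message text, and
    # metadata fields that later queries can filter and order on.
    return {
        "id": str(uuid4()),
        "text": content,
        "role": role,
        "user_id": user_id,
        "conversation_id": conversation_id,
        "interface": interface,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

doc = make_history_document("user", "How do I create an index?", "u1", "c1")
```

In practice these dicts would be upserted into the history collection; the exact document shape the pgml SDK expects may differ.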

@@ -11,14 +11,21 @@ packages = [{include = "pgml_chat"}]
python = ">=3.8,<4.0"
openai = "^0.27.8"
rich = "^13.4.2"
pgml = "^0.9.0"
pgml = {version = "0.9.4", source = "testpypi"}
Contributor

should we wait for a release?

Contributor Author

I'm planning to push pgml-chat to PyPI after pgml 0.9.4 is released to PyPI. In the meantime, I want to update master.

#.idea/

pgml_chat/pgml_playground.py
Contributor

Now I'm curious what's in here.

messages = []
messages.append({"role": "system", "content": system_prompt})

chat_history_messages = await chat_collection.get_documents({
Contributor

An implementation more in line with the thesis would combine the first call with this call in a CTE, and follow up with the next call, all in one.

async def generate_response(
    messages, openai_api_key, temperature=0.7, max_tokens=256, top_p=0.9
):
    openai.api_key = openai_api_key
    log.debug("Generating response from OpenAI API: " + str(messages))
    response = openai.ChatCompletion.create(
        # model="gpt-3.5-turbo",
        # model="gpt-3.5-turbo-16k",
        model="gpt-4",
Contributor

Who's paying for this?

Contributor Author

OPENAI_API_KEY is an environment variable, so the user pays for it.
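A minimal sketch of that arrangement (the helper name here is hypothetical): read the key from the environment and fail with a clear message when it is missing, so credentials are never hard-coded:

```python
import os

def load_openai_api_key():
    # The user supplies (and pays for) their own key via the environment.
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "Set the OPENAI_API_KEY environment variable to use pgml-chat."
        )
    return key

os.environ.setdefault("OPENAI_API_KEY", "sk-example")  # illustration only
api_key = load_openai_api_key()
```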

WHERE metadata @> '{\"interface\" : \"cli\"}'::JSONB
AND (metadata @> '{\"role\" : \"user\"}'::JSONB
OR metadata @> '{\"role\" : \"assistant\"}'::JSONB)
ORDER BY metadata->>'timestamp' DESC LIMIT %d""" % (
Contributor

Why is the timestamp in the metadata, instead of a field on the record?

Contributor Author

We do have a field on the record, but I insert the user message and the response together, and each of them needs its own timestamp. I could also order by the field on the record.
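A plain-Python stand-in for that metadata filter (hypothetical data, not the PR's code) makes the ordering concrete, and is a reminder that in SQL, AND binds tighter than OR, so the two role predicates should be parenthesized together:

```python
# Hypothetical in-memory history documents standing in for the collection.
history = [
    {"role": "user", "interface": "cli", "timestamp": "2023-10-24T19:01:00", "text": "hi"},
    {"role": "assistant", "interface": "cli", "timestamp": "2023-10-24T19:01:05", "text": "hello"},
    {"role": "assistant", "interface": "slack", "timestamp": "2023-10-24T19:02:00", "text": "elsewhere"},
]

def recent_cli_messages(docs, limit):
    matched = [
        d for d in docs
        # Grouping matters: interface AND (role user OR role assistant).
        if d["interface"] == "cli" and d["role"] in ("user", "assistant")
    ]
    # ISO-8601 timestamps sort correctly as strings; newest first.
    matched.sort(key=lambda d: d["timestamp"], reverse=True)
    return matched[:limit]

recent = recent_cli_messages(history, 2)
```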

@@ -0,0 +1,114 @@
from pgml import Collection, Builtins, Pipeline
Contributor

I see what's in the playground now 👀

@santiatpml santiatpml merged commit dcbc7d4 into master Oct 24, 2023
@santiatpml santiatpml deleted the santi-pgml-chat-multi-turn branch October 24, 2023 19:09