Generative Fallback: Fine Tuning

I have a question on how to further fine-tune the generative fallback. With the generative fallback prompt I have (seen below), the agent still answers questions where I want it to say: "I can only answer questions about company X."

For example, I asked the chatbot, which uses a data store for the majority of its answers with a few deterministic routes for specific requests (live agent, etc.), "What's your thought on company Y?" and it answered "Company Y is ...." Company Y is a competitor of ours. How do I edit the fallback prompt so the agent only answers questions related to company X?

To fix this, should I:

a. Alter the fallback prompt? If so, where/how?

b. Alter banned phrases?

c. Create more deterministic routes? If so, how many? This would defeat some of the purpose of a gen AI chatbot.

d. Edit route descriptions? 

Gen Fallback Prompt:

You are a helpful assistant agent and your mission is to be the trusted partner for resolving all issues related to the company X. You provide technical and non-technical expertise, but can only answer questions if they are related to the company X.

In particular, you can $route-descriptions. You will try your best to see if you can resolve the user's issue or queries without directing the user to outside support at https.X/customer-support/.com.

The conversation between the human and you so far was:
${conversation USER:"Human:" AGENT:"AI"}

Then, the human asked:
$last-user-utterance

You say:
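One illustrative option for (a), not an official Dialogflow template: add an explicit refusal rule plus a couple of negative examples to the top of the fallback prompt, so the model sees both the rule and the exact wording of the refusal. The company names and examples below are placeholders.

```
You are a helpful assistant for company X. You may ONLY answer questions
about company X. If the question is about any other company, a competitor
(such as company Y), or an unrelated topic (science, trivia, etc.), reply
exactly: "I can only answer questions about company X." Do not add
anything else to a refusal.

Human: What's your thought on company Y?
AI: I can only answer questions about company X.

Human: Why is the ocean blue?
AI: I can only answer questions about company X.
```

Few-shot refusal examples like these tend to be followed more reliably than a single abstract instruction, though behavior still varies by model.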


To be honest with you, your approach looks great. You did exactly what I would do. On June 15th, the default LLM for generative fallback will change: https://cloud.google.com/dialogflow/docs/release-notes

Another suggestion would be to call a webhook and try Gemini Pro or OpenAI models.

Best,

Xavi
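A minimal, framework-free sketch of the webhook idea above, assuming the end-user input arrives in the request's "text" field and the reply goes back under fulfillmentResponse. The call_llm helper is a hypothetical stub; a real implementation would call Gemini Pro or another model there.

```python
# Sketch of a Dialogflow CX fallback webhook handler (illustrative only).
# Takes the CX webhook request JSON, sends the last user utterance to an
# LLM with a guardrail prompt, and wraps the reply in the CX response shape.

GUARDRAIL = (
    "You are a support agent for company X. If the question is not about "
    "company X, reply exactly: 'I can only answer questions about company X.'"
)

def call_llm(prompt: str) -> str:
    # Hypothetical stub: replace with a real Gemini Pro / OpenAI call.
    return "I can only answer questions about company X."

def handle_webhook(cx_request: dict) -> dict:
    # The webhook request carries the end-user input in the "text" field.
    utterance = cx_request.get("text", "")
    reply = call_llm(f"{GUARDRAIL}\nUser: {utterance}")
    # The agent reply goes under fulfillmentResponse.messages[].text.text[].
    return {"fulfillmentResponse": {"messages": [{"text": {"text": [reply]}}]}}

# Example: an off-topic question gets the guardrail refusal from the stub.
resp = handle_webhook({"text": "What's your thought on company Y?"})
```

Because the prompt lives in your own code here, you control the guardrail wording directly instead of relying on the built-in fallback prompt.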

Follow up question on banned phrases:

When I type a banned phrase into the chatbot, I get the generic no-match response: "Sorry, could you say that again?" How do I change this response?


That is because the no-match event is probably being triggered. Can you confirm this?

You are correct, the default no-match event is triggered when I mention a banned phrase. For every default no-match I have turned on generative fallback. It's only when a banned phrase is hit that it seems to bypass the generative fallback and go straight to the predefined no-match agent response: "Sorry, I didn't get that, can you rephrase?"

Also, another follow-up question. I have the same gen fallback prompt stated above, and it clearly states in the first paragraph to only answer questions related to company X. However, sometimes when I ask questions like "why is the ocean blue?" or "why is the grass green?" it gives me a legitimate answer instead of what I want: "Sorry, I can only answer questions related to company X."

That is a common issue with gen AI solutions. The only solution is basically to try different prompts until you get one working as expected. That is called prompt engineering.

I can see that. What about the banned phrases, where the no-match is triggered but the gen AI no-match fallback is not? Is that something you have seen?


I usually put everything in the prompt, so whenever I want to change to a different model and call it from a webhook, I can reuse the prompt.
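A sketch of that prompt-reuse idea: keep the whole prompt in one template with named placeholders (mirroring the $route-descriptions, $conversation, and $last-user-utterance placeholders in the original prompt), then render it the same way regardless of which model consumes it. All names here are illustrative.

```python
# One reusable prompt template, model-agnostic: the same rendered text can
# be sent to the built-in generative fallback, Gemini, or any other model.

PROMPT_TEMPLATE = """You are a helpful assistant agent for company X.
You can only answer questions related to company X.
In particular, you can {route_descriptions}.

The conversation between the human and you so far was:
{conversation}

Then, the human asked:
{last_user_utterance}

You say:"""

def render_prompt(route_descriptions: str, conversation: str,
                  last_user_utterance: str) -> str:
    # Single rendering function reused no matter which model is called.
    return PROMPT_TEMPLATE.format(
        route_descriptions=route_descriptions,
        conversation=conversation,
        last_user_utterance=last_user_utterance,
    )

prompt = render_prompt(
    "check order status or escalate to a live agent",
    "Human: hi\nAI: Hello! How can I help with company X?",
    "What's your thought on company Y?",
)
```

Switching models then only means changing which API the rendered string is sent to, not rewriting the prompt itself.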