Change pre-defined responses from Datastore Agent?

Problem Statement: When I ask the datastore chatbot certain types of questions such as:

Do you like people? Who is your mother? How old are you?

No-match is triggered and the Gen AI fallback kicks in, but I get the same response for each of these questions:

'I am powered by PaLM 2, which stands for Pathways Language Model 2, a large language model from Google AI.'

Does anyone know how to change this response?


Hi, you can customize the fallback prompt in the Agent Gen AI settings. You can try some short prompts like:

"never say that you are a large language model. You are a useful chatbot that helps XYZ"

I tried that, adding this to the Gen AI fallback prompt:

"Never say 'I am powered by PaLM 2, which stands for Pathways Language Model 2, a large language model from Google AI.'"

When I ask those odd questions again, I still get the same response, even though I told it not to say that.

The problem with this feature is that, as far as I can tell, there is a higher-level prompt sitting on top of the custom prompt that you can tweak.

Based on your experience with Gen AI fallback, is there a best practice when building it out? When is the prompt too long? Right now I keep adding to it: 'don't say this', 'no negative comments', 'only answer questions that are related to company X'.

The prompt is getting larger each day.

If this feature does not fit your needs, just call a webhook with the same prompt and the model you want; I'm pretty sure you'll get better outputs. As for the prompt size, no worries, that's normal. If you call a webhook and use LangChain or the Vertex AI generative SDK there, you can set parameters to deal with these "banned phrases".
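To illustrate the webhook approach: below is a minimal sketch, assuming the Vertex AI Python SDK (`google-cloud-aiplatform`). The model name, system prompt, banned-phrase list, and the `scrub_banned_phrases` helper are all illustrative; the SDK itself doesn't have a "banned phrases" parameter, so one simple option is a post-filter on the model output.

```python
# Hypothetical sketch: calling Gemini via the Vertex AI SDK from a webhook,
# with a custom system prompt plus a simple post-filter for unwanted phrases.
# The model name, prompt, and helper names are illustrative, not prescribed.

BANNED_PHRASES = [
    "I am powered by PaLM 2",
    "large language model from Google AI",
]

SYSTEM_PROMPT = (
    "You are a helpful assistant for company X. Never mention the underlying "
    "model. Only answer questions related to company X."
)

def scrub_banned_phrases(text: str, banned: list[str]) -> str:
    """If the model output contains any banned phrase, swap in a neutral reply."""
    for phrase in banned:
        if phrase.lower() in text.lower():
            return "I'm here to help with questions about company X."
    return text

def generate_reply(user_query: str) -> str:
    """Call Gemini with the custom prompt, then filter the output.

    Requires: pip install google-cloud-aiplatform, plus GCP credentials.
    """
    from vertexai.generative_models import GenerativeModel, GenerationConfig

    model = GenerativeModel("gemini-1.5-flash", system_instruction=SYSTEM_PROMPT)
    response = model.generate_content(
        user_query,
        generation_config=GenerationConfig(temperature=0.2, max_output_tokens=256),
    )
    return scrub_banned_phrases(response.text, BANNED_PHRASES)
```

Since the prompt lives in your own code here, it can grow as long as you need without fighting the platform's hidden prompt.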

When you say call a webhook, do you mean to access a larger LLM and ask the question there? To confirm: instead of calling the Gen AI fallback, have no-match connect to a webhook, which connects to a better LLM where I put the prompt? I was going to connect to Gemini, but it seems counter-intuitive that I'd be leaving one Google LLM prompt in Dialogflow to connect to another Google LLM.

If so, could you share an example of how to do that?

You said there is a 'higher-level prompt' on top of the Gen AI fallback prompt. Do you know how to access it? I'd prefer to make changes in Dialogflow rather than webhook it out.

 

Yes, what I am saying is to connect the no-match event to a webhook that contains the prompt plus the call to another model (Gemini or an OpenAI one).

You cannot access the higher-level prompts.