The one thing that is clear about LLMs like ChatGPT is that you can spend a lot more time building prompts than you do building code. And because prompts are a crucial component of any GPT project, you need to deal with them as if they were code modules1. The more efficient our tools are at working with them, the more efficient we’ll be as AI developers.
Although we can embed our prompts in other assets (e.g., inside of OmniScripts), it’s probably best to pull them out into standalone files if we want to track their evolution and control their releases. And doing that enables us to build tools for editing, testing, and revising them in isolation.
In the case of Enterprise Applications (like Salesforce), odds are high that we’ll also need to merge live data from the CRM (or other) system into a prompt in order to elicit a contextually appropriate response. For example, if you want to generate the text for an email that will include an attached quote, you’ll want to merge in information about the account, contact, sales rep, and opportunity into the prompt so that the email is pertinent. Whatever tool we use to edit our prompts needs to know how to do that merging.
Fortunately, building a good-enough starter tool isn’t that hard and, last week, I shared a Google Colab notebook as an initial version of such a tool:
It is a bit monolithic, but it’s easy to find where you need to edit your template(s) and give them another test. Let me walk through the design in a bit of detail, looking at how it works with templates and OpenAI, and use it as a means of discussing what we need in a tool2.
Startup
The first step is to load the packages needed by the program. Because Colab doesn’t have the OpenAI library installed by default, we have to tell it to install the library before we get rolling. Quickly: jinja2 is a templating system (more on that in a moment); json is, well, JSON; openai is the library that lets us call OpenAI; textwrap helps us format the response; time allows us to track how long the call to OpenAI takes; and zipfile and files allow us to download the finished templates.
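A sketch of what that startup cell looks like (the comments are mine; in Colab you would run `!pip install openai` in a cell first, and `google.colab` only exists inside Colab, so those two imports are shown commented here):

```python
import jinja2    # templating engine for merging Data JSON into the prompts
import json      # parsing the pasted Data JSON
import textwrap  # wrapping long response lines for readability
import time      # timing the call to OpenAI
import zipfile   # packaging the finished templates for download

# import openai                    # the OpenAI client (needs the !pip install)
# from google.colab import files   # Colab-only helper for downloading the zip
```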
To paraphrase Chekhov3, if we show a library in act 1, it will be invoked in act 2.
The next section collects all the inputs needed to call OpenAI. The first part of this solicits your OpenAI API key, which you can either enter at runtime (so it will quickly be forgotten) or hard code into the script. We’ve talked a bit about security around your keys, so you know this is a concern. Normally, in Python, you stick the API key in the environment because … it’s not Python’s problem anymore? … but it’s not as easy to do that when the Notebook is hosted by Google4. So, either you suck it up and embed your key in the code or you do what I did, which is read the key in from the user each time you run it. Ugh, not great choices, but for the benefit of making this easy to share I will put up with it.
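One way to sketch that read-it-each-time approach (the function name is mine, not the notebook’s) is with the standard library’s `getpass`, which at least keeps the key out of the saved notebook:

```python
import getpass
import os

def read_api_key():
    """Return the OpenAI key from the environment, else ask the user."""
    # Entering it at runtime means the key vanishes when the session ends;
    # hard-coding it means the key ships with every shared copy of the notebook.
    return os.environ.get("OPENAI_API_KEY") or getpass.getpass("OpenAI API key: ")
```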
The second section is your Data JSON from your OmniScript, captured at the point where you would call OpenAI. The triple quotes (""") delimit the start and end of the string, and allow you to paste in the Data JSON directly without concern for formatting or escaping. While you’re in preview mode you can press the “copy” button to capture the current Data JSON:
Then you can paste it into the Notebook as is. This part is pretty easy, as long as you delete the previous Data JSON completely and don’t accidentally erase (or damage) one of the triple quotes. Triple-quoted multi-line strings are one of those Python features I really miss in other languages.
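The pasted cell ends up looking something like this (the field names here are invented for illustration; yours come from your OmniScript):

```python
import json

# Hypothetical Data JSON pasted between triple quotes -- no escaping needed,
# even across multiple lines.
data_json = """
{
  "AccountName": "Acme Corp",
  "ContactFirstName": "Jane",
  "OpportunityName": "Acme Renewal Q3"
}
"""

# Parse it once so the templating step can reach into it.
data = json.loads(data_json)
```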
This section allows you to create and edit the two prompts you’ll need: system and user. From the perspective of ChatGPT, the system prompt tells it what you generally expect from it, while the user prompt is the specific ask5. You can see in these example prompts that there is no merging of data into the system prompt6, but the user prompt is filled with data from the OmniScript.
The next section, Execution, consists of two parts. The first actually merges data from the Data JSON into the system and user templates (and prints the merged results out in case that’s helpful). We’re using a simple templating system called Jinja2 to do the heavy lifting of merging the Data JSON into the templates: we just give it the text of the template and then the data structure, and it outputs the finished prompt for us!
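That merge step really is that small. A minimal sketch, with a made-up template and data rather than the notebook’s own:

```python
from jinja2 import Template

# Hypothetical user-prompt template; the {{ ... }} slots are filled from
# the parsed Data JSON.
user_template = Template(
    "Write a short email to {{ ContactFirstName }} at {{ AccountName }} "
    "about the attached quote for {{ OpportunityName }}."
)

data = {
    "ContactFirstName": "Jane",
    "AccountName": "Acme Corp",
    "OpportunityName": "Acme Renewal Q3",
}

# Jinja2 does the heavy lifting: template text in, finished prompt out.
user_prompt = user_template.render(data)
print(user_prompt)
```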
The second section makes the actual call to ChatGPT, passing in the prompts, and then prints out the results.
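The shape of that call, assuming the chat-completions API (the helper function and model name here are my illustration, and the network call itself is shown commented since it needs a live key):

```python
import textwrap
import time

def build_messages(system_prompt, user_prompt):
    """Package the two merged prompts in the shape the chat API expects."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a helpful sales assistant.",
    "Write a short email about the attached quote.",
)

# The actual call, timed and wrapped for readability:
# start = time.time()
# response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
# print(f"Took {time.time() - start:.1f}s")
# print(textwrap.fill(response.choices[0].message.content, width=80))
```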
If the results are unsatisfying, you can go back to the prompt editing section, revise your prompts, rerun just that section, and then rerun the execution section to try again. This is a bit strange compared to normal programs (being able to rerun sections of code out of order) but it’s part of the appeal of Python notebooks.
The final section of the notebook offers you the opportunity to download the finished templates for importing into Salesforce. If you answer Y, then it generates and downloads a small .zip file you can use. It generates the “resource-meta” files you need if you’re going to copy them over to Visual Studio Code.
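Roughly, the packaging step looks like this (the template names are invented, and the meta XML shown is the minimal static-resource form I believe Salesforce expects; treat it as a sketch rather than the notebook’s exact output):

```python
import zipfile

# Hypothetical template names and bodies; the real notebook uses yours.
templates = {
    "EmailSystemPrompt": "You are a helpful sales assistant.",
    "EmailUserPrompt": "Write an email to {{ ContactFirstName }}.",
}

# Minimal *-meta.xml companion file for a static resource.
META = """<?xml version="1.0" encoding="UTF-8"?>
<StaticResource xmlns="http://soap.sforce.com/2006/04/metadata">
    <cacheControl>Private</cacheControl>
    <contentType>text/plain</contentType>
</StaticResource>
"""

with zipfile.ZipFile("templates.zip", "w") as zf:
    for name, body in templates.items():
        zf.writestr(name + ".resource", body)
        zf.writestr(name + ".resource-meta.xml", META)

# In Colab you would then hand the zip to the browser:
# files.download("templates.zip")
```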
Hopefully you find this approach useful for refining your templates. All this does leave a mystery, though: how will we use these odd looking Jinja2 templates in Salesforce?
That will be answered on Thursday!
Don’t be left hanging on the cliff forever! Subscribe to find out what happens next!
But do leave your friends hanging …
They kind of are.
Or a future, better tool…
I just discovered that the Star Trek one is a Chekov, not a Chekhov. So the “h” means it’s the playwright Anton I’m referring to: https://en.wikipedia.org/wiki/Chekhov%27s_gun
The way you set environment variables is to include code to set them, which kind of defeats the purpose here…
To be clear, you can merge data in, there was just no reason to.