Generative AI is transforming the professional services industry, lifting productivity to heights once thought unattainable. Interacting with large language models has become a required core competency, and the best way to do it is through prompt engineering. Attend this session and learn this new skill from a machine learning engineer.
Greg Alexander [00:00:15] Hi, everyone. This is Greg Alexander, the host of the Pro Serv podcast, brought to you by Collective 54, the first community dedicated to the boutique professional services industry. On this episode, we’re going to talk about prompt engineering. Hopefully, you’re aware of what that term is now, since we’re all living in the AI era, but if you’re not aware of what that is, we’re going to talk about that and how to leverage it in today’s economy. And we have a great guest who is going to walk us through the basics, and then she’ll participate in our member Q&A later on. Her name is Numa Dhamani. Did I say your last name correctly?
Numa Dhamani [00:00:59] Yes.
Greg Alexander [00:01:00] Very good. And she is with Kung Fu AI and is a member of Steven Strauss’s team who is a member of Collective 54. So, Numa, would you please introduce yourself and your firm to the audience?
Numa Dhamani [00:01:16] Yeah. So, hi, I’m Numa, and thank you so much for having me today. I’m a principal machine learning engineer at a boutique consulting firm that focuses on artificial intelligence. And my personal expertise is in the natural language processing and large language model space.
Greg Alexander [00:01:34] Okay. And Numa, I was researching your background before the call, and it’s rather impressive. Would you mind sharing a little with the audience about what your background is?
Numa Dhamani [00:01:47] Yeah. So I have primarily worked in the information integrity space. So I’ve done a lot of work around disinformation and misinformation, and then also privacy and security.
Greg Alexander [00:02:01] Okay, very good. All right. Well, let’s start with the basics. So what is prompt engineering?
Numa Dhamani [00:02:07] So prompt engineering is really just the practice of structuring and refining prompts to get specific responses from a generative AI system. So here your system would be something like ChatGPT or Bard. And the prompts are really just a way to interact with these systems, where you can help guide the model towards certain types of desired outputs. Effective prompt engineering would involve formulating prompts that clearly communicate what your desired task is. And this can include detailed instructions, providing context, or specifying what you want your output to look like, so you can make sure that what you’re getting out of the model is aligned with your intention.
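Numa's three ingredients of an effective prompt, a clear task, supporting context, and a desired output format, can be sketched as a simple template. This is a hypothetical illustration; the function and field names are ours, not from the conversation, and a real prompt would be sent to a system like ChatGPT or Bard.

```python
def build_prompt(task: str, context: str, output_format: str) -> str:
    """Assemble a prompt from three parts: a clear task,
    supporting context, and a desired output format."""
    return (
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Output format: {output_format}"
    )

# Example: a role-and-context prompt for summarizing a document.
prompt = build_prompt(
    task="Summarize the attached document.",
    context="You are a financial analyst reviewing a franchise disclosure document.",
    output_format="Five bullet points focused on financial matters.",
)
print(prompt)
```

Changing only the context field (say, to a management consultant specializing in competitive strategy) would steer the same model toward a very different summary.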
Greg Alexander [00:02:55] Okay. Very good. And why is it important to develop the skill of prompt engineering?
Numa Dhamani [00:03:04] Yeah. So if you understand how to do prompt engineering, it can really empower you to take advantage of the capabilities of these models for various applications. You’re going to be able to communicate really complex tasks and requirements to these models, which can help ensure that the generated content and responses align closely with your intended purpose for that task. So it just helps you leverage the capabilities of these systems.
Greg Alexander [00:03:33] So is it true or false that when I use ChatGPT, as an example, and the response that comes back is inaccurate, it’s not the model’s fault, it was that I wasn’t clear in my request? Is that true or false?
Numa Dhamani [00:03:51] So, a little bit of both, which I know isn’t the best answer. But the model isn’t really designed to be accurate; it’s designed to be helpful. You can, however, use strategies to help get more accurate answers. You can give it some factual information, you can do certain things on the back end, or you can hook it up to databases to really get factual information. But you can also ask it to critique itself sometimes. So if it provides a quote to you and you’re like, I’m not actually sure someone said this, you can say, well, can you actually verify that for me? Or can you go double-check that response? So it’s a little bit of both, where you can craft a prompt to get more accurate responses. There are several techniques you can use. One is called self-consistency, where you ask it the same question three or four times, see what answers it actually gives you, and pick the majority. And part of it is just the nature of these models: they’re probabilistic in nature and aren’t designed to be factual.
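The self-consistency idea Numa describes, asking the same question several times and keeping the majority answer, can be sketched like this. The `ask_model` function is a hypothetical stand-in; a real version would call an actual LLM API, which may return different answers to the same prompt.

```python
from collections import Counter
from itertools import cycle

# Stand-in for a real LLM call. A probabilistic model can give
# different answers to the same question, so we simulate that here.
_fake_answers = cycle(["Paris", "Paris", "Lyon"])

def ask_model(question: str) -> str:
    return next(_fake_answers)

def self_consistent_answer(question: str, samples: int = 3) -> str:
    """Ask the same question several times and keep the majority answer."""
    answers = [ask_model(question) for _ in range(samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("What is the capital of France?"))  # → Paris
```

Here one out of three samples is a wrong answer, but the majority vote still recovers the consistent one.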
Greg Alexander [00:05:07] When you say probabilistic in nature as it relates to an LLM, explain how that works.
Numa Dhamani [00:05:14] Yeah, so a language model is really designed to represent natural language, and it’s probabilistic. It basically generates probabilities for a series of words based on the data it’s trained on. The models that we see these days are trained on the entire Internet; they’re trained on crazy amounts of data, like billions of documents. And the way they work is they actually just predict what the next word would be. So let’s say the sentence is “I am a machine learning” and we’re trying to predict the word “engineer.” It might have probabilities assigned for several words that could fit there: engineer, technologist, practitioner, researcher. And the one that has the highest probability, which would be the word it’s probably seen used the most in that context, is the one it will pick.
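Numa's next-word example can be illustrated with a toy probability table. The probabilities below are invented for illustration; a real model computes a distribution over its whole vocabulary.

```python
# Toy distribution over possible next words for the prefix
# "I am a machine learning ..." — the probabilities are made up.
next_word_probs = {
    "engineer": 0.62,
    "technologist": 0.14,
    "practitioner": 0.13,
    "researcher": 0.11,
}

def predict_next_word(probs: dict) -> str:
    """Greedy decoding: pick the word with the highest probability."""
    return max(probs, key=probs.get)

print(predict_next_word(next_word_probs))  # → engineer
```

In practice models often sample from this distribution rather than always taking the maximum, which is one reason the same prompt can produce different outputs on different runs.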
Greg Alexander [00:06:12] Interesting. You know, I’ll give the listeners an example here of how what I learned from Numa recently has helped me. So there’s a feature in GPT-4 called Code Interpreter, and this allows you to load a document. So I loaded a 224-page franchise disclosure document and I asked the tool, please summarize this document. And I got back a response. Then I said, okay, you are a financial analyst; please summarize the document. And the summary was so different, you know, it was all around financial matters. And then I said, you are a management consultant specializing in competitive strategy; summarize the document, and a whole different set of things came up. And I just bring that little example up to help the listeners who might be new to this. It enriched my experience tremendously, and it made the tool support the initiative I was working on that much better. So providing context, as Numa likes to say, is very, very helpful. Okay. Who should be using prompt engineering? Is it everybody, or certain job functions? What are your thoughts on that?
Numa Dhamani [00:07:30] It’s really anyone who wants to interact with a generative AI system. Like, any time you’re interacting with it, you are actually writing and engineering a prompt, right? So business leaders can use it, developers can use it, content creators can use it, researchers or students. It’s really anyone who wants to leverage the capabilities of a generative AI system.
Greg Alexander [00:07:51] Okay. And is there a particular time, like when should somebody use this? Early in a project? Late in the project? Across the entire spectrum? What are your thoughts on that?
Numa Dhamani [00:08:01] I think you can incorporate it into your workflow early, late, or kind of throughout. It really depends on what task you want. You can use it for brainstorming purposes; it’s actually a really great tool to go back and forth with to brainstorm, I don’t know, a blog post or something. So let’s say we’re talking about a blog post. You can use it to brainstorm the blog post, you can ask it to write certain sections of it, and you could ask it to refine them for you. You could ask it to correct certain word usage throughout as you want. You could ask it for a title towards the end; you could give it the whole thing and say, okay, now give me a title, what do you think a suitable title would be? So there are ways to incorporate it throughout your workflow. It really just depends on what works best for you, like if that’s something that is useful for you, right?
Greg Alexander [00:08:56] Interesting. So I guess the advice there would be to try to use it in the workflow at the task level, you know, beginning, middle, and end, and kind of see how it works for you. That’s really great advice. Where is it used? I am a novice at this, and I spend most of my time on my smartphone, and therefore I don’t use it as often. But when I’m on my PC, I use it more often. So is that common? Is that uncommon? Like, where is it most often used?
Numa Dhamani [00:09:25] I think people probably do use it most on the PC, just because there aren’t really great apps right now. On, you know, your iOS device, I guess you could pull it up, but it doesn’t look great. But you can really use it for any sort of specific task. I’ve seen it a lot for generating content, and a lot of writing or customer service tasks, which actually work really well if you are using it on a PC. A lot of developers use it for coding, myself included; sometimes it can be really great to just ask it for an example of a minimal function that does something, or to use something like Copilot, which runs a language model on the back end. And for those who don’t know, Copilot is basically a generative AI system that helps generate code; it’s the same idea, but tuned for code. So what can be useful is that while you’re typing, it will give you comments or, you know, variable names and things, which can be very easily incorporated while you’re working. I think we might come to a point where people will be using it on their phones. It might be integrated with text messaging and functions like that. Like, I know with Inflection’s Pi, you can text with it, which is their version of ChatGPT. And I think we will start seeing a little bit more of that, where you can very easily pull it up and talk to it. But in its infancy right now, a lot of it is, I think, web browser based.
Greg Alexander [00:11:11] Got it. So for those that are listening that haven’t developed a skill of prompt engineering and after listening to you have been inspired to do so, what advice would you give them?
Numa Dhamani [00:11:23] The best way is just by practicing. You can start with really simple tasks and prompts and then gradually move on to more complex ones, which maybe require logic or reasoning or brainstorming or critiquing. And just don’t be afraid to try different prompts and play with it. Like, it’s actually really fun to do.
Greg Alexander [00:11:44] Yeah, I’m surprisingly enjoying myself. After I was in Austin spending time with you, I went back and said, all right, you know, it’s okay to make mistakes, so try it. And I had your PowerPoint deck up in front of me, with all the instructions on how to do it, which we’ll go over with the members in a later session. I was using it like that, and I was really pleased with how intuitive it was.
Numa Dhamani [00:12:10] Yeah, I’m so glad.
Greg Alexander [00:12:12] So, Numa, you have a book coming out soon. Can you tell us the title of the book? What’s it about? And by the time this airs, it probably will be available, so where can people find it?
Numa Dhamani [00:12:23] Yeah. So the book is called Introduction to Generative AI, and it’ll be published by Manning Publications. It talks about how you can use large language models to their full potential, so things like this, but at the same time it also tries to build an awareness of the risks and limitations that come with using generative AI technologies. So it outlines the broader economic, social, ethical, and legal considerations that you need to think about when you’re using generative AI. And it will be out this fall. Right now, you can preorder it on manning.com, but closer to the release date it will be on Amazon, Target, Barnes and Noble, and some other resellers.
Greg Alexander [00:13:08] Well, congratulations on it. I’ll be buying a copy and will read it. And thank you for contributing to the body of knowledge by going through the hard work of writing a book. I’ve done that myself; I know how difficult that is. I have to ask, did you use AI to write the book?
Numa Dhamani [00:13:26] I did not. There are some examples from GPT and Bard and Claude in the book, but that is kind of the extent of it.
Greg Alexander [00:13:40] Okay, good, good. So it’s original. All right. Fantastic.
Numa Dhamani [00:13:43] Yeah. Yeah. Original piece.
Greg Alexander [00:13:45] Great. Well, Numa, on behalf of the membership, I really want to thank you for supporting Steven and helping us understand prompt engineering. Really looking forward to the member session. And congratulations again on your book. And thanks for being here.
Numa Dhamani [00:14:01] Thank you. Thank you for having me. This is fun.
Greg Alexander [00:14:03] All right. So a few calls to action for the audience. If you’re a member, please attend the Q&A session that we’ll have with Numa; look out for that invitation. If you’re a candidate for membership, go to Collective54.com and apply, and the membership committee will consider your application and get back to you. And if you’re not ready for either of those things and you just want to learn more, I would direct you to my book. It’s called The Boutique: How to Start, Scale, and Sell a Professional Services Firm, which you can find on Amazon. So with that, thanks again, Numa, and thanks to the audience for listening, and we’ll talk to you soon.