Understanding Employee Stock Options in Boutique Professional Service Firms

When the topic of employee stock options arises, thoughts usually gravitate towards tech startups and Silicon Valley’s golden handcuffs. However, the world of boutique professional service firms has its own unique landscape. In these firms, granting stock options is not common practice. Yet, in specific circumstances, options can provide valuable incentives and align professionals with the firm’s objectives. This article delves into the basics of employee stock options within this niche, explaining why they’re less prevalent and the key considerations when they are implemented.

Why Stock Options are Rare in Professional Service Firms

Professional service firms, such as consultancies, marketing agencies, and IT services firms, are traditionally structured around partnership models. In these models, senior professionals work their way up the ranks and eventually buy into the partnership, sharing in profits rather than owning shares that appreciate in value. The unpredictability of client-driven revenues, coupled with a lack of scalable products, makes these firms less conducive to the traditional stock option model seen in product-based or tech companies. Furthermore, the valuation of professional service firms is often based on intangibles like client relationships and human capital, which are more challenging to quantify and forecast than tangible assets or predictable revenue streams.

Where Stock Options Make Sense

Despite the traditional partnership model, there are scenarios where stock options in boutique professional service firms can be beneficial. They can attract top-tier talent, incentivize long-term commitment, or facilitate succession planning. Especially in smaller, specialized firms where the expertise of a few individuals can significantly impact the firm’s value, stock options can create alignment between individual and company success.

Key Items to Consider:

    1. Number of Shares in the Pool: For boutique professional service firms considering stock options, it’s typical to allocate 15-20% of the firm’s total shares for the option pool. This ensures there’s a meaningful reward for employees without excessively diluting existing ownership.

    2. Exercise Price and Valuation: The exercise price is the cost an employee will pay to convert their option into an actual share. To avoid tax complications and ensure fairness, this price should equal the share’s fair market value at the grant date. Given the intangible assets in professional service firms, determining this valuation may require expert assistance.

    3. Type of Option: Equity awards come in various forms, including Incentive Stock Options (ISOs), Non-Qualified Stock Options (NSOs), and restricted stock (which is technically a grant of shares rather than an option). Each has its own tax implications, benefits, and constraints, so it’s essential to choose the one that aligns best with both the firm’s and the employee’s goals.

    4. Duration: Most stock options have a maximum term of 10 years, after which they expire. For ISOs granted to an employee who owns more than 10% of the firm, the maximum term drops to 5 years. This encourages timely exercise and prevents indefinite uncertainty in the ownership structure.

    5. Permissible Forms of Payment: When employees exercise their options, they can do so using cash, by surrendering other shares (net of exercise price), through cashless exercises, or even via promissory notes. The firm needs to define and communicate acceptable payment methods.

    6. Vesting and Early Exercise: Vesting schedules determine when options can be exercised. A common approach is 0% vested during the first year (a one-year cliff), 25% vested at the end of year one, and then pro-rata monthly vesting through the end of year four; a minimal sketch of this arithmetic follows this list. This structure incentivizes longer-term commitment.

    7. Restrictions on the Transfer of Shares: Even after options are exercised, firms often retain some control over the shares. A common restriction is a “right of first refusal,” which requires the employee to offer the shares back to the firm or existing shareholders before selling to an outside party. This preserves the firm’s ability to maintain its ownership structure.
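
To make the vesting arithmetic in item 6 concrete, here is a minimal sketch in Python. The grant size, the 12-month cliff, and the 48-month term are illustrative assumptions, not recommendations; the function simply applies a one-year cliff followed by pro-rata monthly vesting.

    from math import floor

    def vested_options(total_granted, months_since_grant, cliff_months=12, total_months=48):
        """Vested option count under a one-year cliff with monthly vesting thereafter."""
        if months_since_grant < cliff_months:
            return 0  # nothing vests before the cliff
        if months_since_grant >= total_months:
            return total_granted  # fully vested at the end of the schedule
        # 25% vests at month 12, then pro-rata each month through month 48
        return floor(total_granted * months_since_grant / total_months)

    # Example with a hypothetical grant of 4,800 options:
    print(vested_options(4800, 11))   # 0     (still inside the cliff)
    print(vested_options(4800, 12))   # 1200  (25% at the one-year mark)
    print(vested_options(4800, 30))   # 3000  (pro-rata monthly thereafter)
    print(vested_options(4800, 48))   # 4800  (fully vested at year four)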

In conclusion, while stock options are not the norm in boutique professional service firms, they can be a valuable tool in certain circumstances. It’s crucial for firms considering this route to understand the unique challenges and considerations in their industry and design an option plan that aligns with their strategic objectives.

If you find this article helpful, come join us at Collective 54. Apply here.

Episode 148 – Prompt Engineering: A New Skill That Professional Service Firms Need to Learn – Member Case by Stephen Straus and Numa Dhamani

Generative AI is transforming the professional services industry, lifting productivity to levels previously thought unattainable. Interacting with large language models has become a required core competency, and this is best done via prompt engineering. Attend this session and learn this new skill from a machine learning engineer.

TRANSCRIPT

Greg Alexander [00:00:15] Hi, everyone. This is Greg Alexander, the host of the Pro Serv podcast, brought to you by Collective 54, the first community dedicated to the boutique professional services industry. On this episode, we’re going to talk about prompt engineering. Hopefully, you’re aware of what that term is now since we’re all living in the AI era, but if you’re not aware of what that is, we’re going to talk about that and how to leverage it in today’s economy. And we have a great guest who is going to walk us through the basics, and then she’ll participate in our member Q&A later on. Her name is Numa Dhamani. Did I say your last name correctly?

Numa Dhamani [00:00:59] Yes. 

Greg Alexander [00:01:00] Very good. And she is with Kung Fu AI and is a member of Stephen Straus’s team; Stephen is a member of Collective 54. So, Numa, would you please introduce yourself and your firm to the audience?

Numa Dhamani [00:01:16] Yeah. So, hi, I’m Numa, and thank you so much for having me today. I’m a principal machine learning engineer for a boutique consulting firm that focuses on artificial intelligence. And my personal expertise is in the natural language and large language model space.

Greg Alexander [00:01:34] Okay. And Numa, I was researching your background before the call, and it’s rather impressive. Would you mind sharing a little with the audience what your background is?

Numa Dhamani [00:01:47] Yeah. So I have primarily kind of worked in the information worker space. So I’ve done a lot of work around disinformation and misinformation. And then also, like privacy and security. 

Greg Alexander [00:02:01] Okay, very good. All right. Well, let’s start with the basics. So what is prompt engineering? 

Numa Dhamani [00:02:07] So prompt engineering is really just the practice of structuring and refining prompts to get specific responses from a generative A.I. system. So here your system would be something like ChatGPT or Bard. And the prompts are really just a way to interact with these systems where you can help guide the model towards achieving certain types of desired outputs. So effective prompt engineering would involve formulating prompts that clearly communicate what your desired task is. And this can include detailed instructions, or providing context, or what you want your output to look like, so you can make sure that what we are getting out of the model is aligned with the intention.
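
To illustrate the structure Numa describes (a prompt that states the task, supplies context, and pins down the desired output), here is a minimal Python sketch. The send_to_llm call is a hypothetical stand-in for whichever chat model or API you use; only the prompt-building logic is the point.

    def build_prompt(task, context, output_format):
        """Assemble a prompt that states the task, supplies context, and specifies the output."""
        return (
            "Task: " + task + "\n\n"
            "Context:\n" + context + "\n\n"
            "Output format: " + output_format
        )

    prompt = build_prompt(
        task="Summarize the client engagement notes below for a partner review.",
        context="(paste the engagement notes here)",
        output_format="Three bullet points, each under 25 words, followed by one open risk.",
    )
    print(prompt)
    # response = send_to_llm(prompt)  # hypothetical call to your chat model, e.g., ChatGPT or Bard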

Greg Alexander [00:02:55] Okay. Very good. And why is it important to develop the skill of prompt engineering? 

Numa Dhamani [00:03:04] Yeah. So if you understand how to do prompt engineering, it can really help empower you to take advantage of the capabilities of these models for various applications. So you’re going to be able to communicate really complex tasks and requirements to these models, which can help ensure that the generated content and responses align closely with what your intended purpose for that task was. So it just helps you leverage the capabilities of these systems.

Greg Alexander [00:03:33] So is it true or false that when I use ChatGPT, as an example, and the response that comes back is inaccurate, it’s not the model’s fault, it was that I wasn’t clear in my request? Is that true or false?

Numa Dhamani [00:03:51] So, a little bit of both, which I know isn’t the best answer. But the model isn’t really designed to be accurate; it’s designed to be really helpful. You can, however, use strategies to help get more accurate answers, so you can give it some factual information. You can do certain things on the back end, or you can hook it up to databases or something to really get factual information. But you can also ask it to go critique itself sometimes. So if it provides a quote to you and you’re like, I’m not actually sure someone said this, you can be like, well, can you actually verify that for me? Or can you go double-check that response? So it’s a little bit of both, where you can craft a prompt to get more accurate responses. There are several techniques you can use; one is sort of called self-consistency, where you can go ask it the same question like three or four times and see if it actually gives you the right answer three or four times, and kind of pick the majority. And part of it is just the nature of these models; it’s because they’re probabilistic in nature and aren’t designed to be factual.
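
The technique Numa mentions is usually called self-consistency: ask the same question several times and keep the answer that appears most often. A minimal Python sketch, where ask_model is a hypothetical stand-in for a call to your chat model:

    from collections import Counter

    def self_consistent_answer(ask_model, question, samples=4):
        """Ask the model the same question several times and return the majority answer."""
        answers = [ask_model(question).strip() for _ in range(samples)]
        majority_answer, _count = Counter(answers).most_common(1)[0]
        return majority_answer

    # Usage (hypothetical): ask_model would send the prompt to your LLM and return its text reply.
    # answer = self_consistent_answer(ask_model, "In what year was the firm founded?")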

Greg Alexander [00:05:07] When you say probabilistic in nature as it relates to an LLM, explain how that works.

Numa Dhamani [00:05:14] Yeah, so a language model is really designed to represent natural language, and it’s probabilistic. So it basically generates probabilities for a series of words based on the data it’s trained on. The models that we see these days are trained on the entire Internet; they’re trained on crazy amounts of data, like billions and trillions of documents. And the way they work is they actually just predict what the next word would be. So let’s say the sentence is “I am a machine learning” and we’re trying to predict the word “engineer.” It might have probabilities assigned for several words that could fit there: engineer, technologist, practitioner, researcher. And the one that has the highest probability, which would be the word it has probably seen used the most in that context, is the one it will assign.
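
As a toy illustration of what “probabilistic” means here, suppose the model assigns scores to a few candidate next words for “I am a machine learning ...”. The numbers below are made up purely for illustration; real models score every token in their vocabulary.

    import math
    import random

    # Hypothetical raw scores (logits) for candidate next words; purely illustrative numbers.
    logits = {"engineer": 3.1, "researcher": 2.0, "technologist": 1.4, "practitioner": 1.1}

    # Softmax turns the scores into a probability distribution over the candidates.
    total = sum(math.exp(score) for score in logits.values())
    probs = {word: math.exp(score) / total for word, score in logits.items()}
    print(probs)  # "engineer" ends up with the highest probability

    # Greedy decoding picks the single most likely word; sampling draws according to the probabilities.
    greedy = max(probs, key=probs.get)
    sampled = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
    print(greedy, sampled)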

Greg Alexander [00:06:12] Interesting. You know, I’ll give the listeners an example here of how what I learned from Numa recently has helped me. So there’s a feature in GPT-4 called Code Interpreter, and this allows you to load a document. So I loaded a 224-page franchise disclosure document and I asked the tool, please summarize this document, and I got back a response. And then I said, okay, you are a financial analyst, please summarize the document, and the summary was so different. You know, it was all around financial matters. And then I said, you are a management consultant specializing in competitive strategy, summarize the document, and a whole different set of things came up. So in that little example, which I just bring up to help the listeners who might be new to this, providing context enriched my experience tremendously, and it made the tool support the initiative that I was working on that much better. So providing context, as Numa likes to say, is very, very helpful. Okay. Who should be using prompt engineering? Is it everybody, or is it certain job functions? What are your thoughts on that?
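
Greg’s experiment maps onto a simple pattern: keep the document and the task fixed and vary only the role line at the top of the prompt. A minimal Python sketch, with send_to_llm again as a hypothetical stand-in for the chat model call:

    ROLES = [
        "You are a financial analyst.",
        "You are a management consultant specializing in competitive strategy.",
    ]

    def persona_prompt(role, document_text):
        """Fix the task and the document; vary only the persona to steer the summary."""
        return role + "\n\nPlease summarize the following document:\n\n" + document_text

    document_text = "(contents of the franchise disclosure document)"
    for role in ROLES:
        prompt = persona_prompt(role, document_text)
        print(prompt[:80])
        # summary = send_to_llm(prompt)  # hypothetical call; each role yields a different summary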

Numa Dhamani [00:07:30] It’s really anyone who wants to interact with a generative AI system. Any time you’re interacting with it, you are actually writing, or engineering, a prompt, right? So business leaders can use it, developers can use it, content creators can use it, researchers or students. It’s really anyone who wants to leverage the capabilities of a generative AI system.

Greg Alexander [00:07:51] Okay. And is there a particular time, like when should somebody use this? Early in a project? Late in the project? Across the entire spectrum? What are your thoughts on that?

Numa Dhamani [00:08:01] See, I think you can incorporate it into your workflow either early, later, or kind of throughout. It really depends on what task you want. So you can use it for brainstorming purposes; it’s actually a really great tool to go back and forth with to brainstorm, I don’t know, like a blog post or something. So let’s say we’re talking about a blog post. You can use it to brainstorm a blog post. You can ask it to maybe write certain sections of it, and you could ask it to refine it for you. You could ask it to, you know, correct certain word usage throughout as you want. You could ask it for a title towards the end; you could give it the whole thing and be like, okay, well, now give me a title, what do you think a suitable title would be? So I think there are ways to incorporate it throughout your workflow. It really just depends on what works best for you, like if that’s something that is useful for you, right?

Greg Alexander [00:08:56] Interesting. So I guess the advice there would be to try to use it in the workflow at the task level, you know, beginning, middle, and end, and kind of see how it works for you. That’s really great advice. Where is it used? I am a novice at this, and I spend most of my time on my smartphone and therefore I don’t use it as often. But when I’m on my PC, I use it more often. So is that common? Is that uncommon? Like, where is it most often used?

Numa Dhamani [00:09:25] I think people maybe mostly use it on the PC, just because there aren’t really great apps right now on, you know, your iOS device; I guess you could pull it up in a browser. But you can really just use it for any sort of specific task. I’ve seen it a lot for generating content, and a lot of writing or customer service tasks, which actually work really well if you are using it on a PC. For a lot of developers who are coding, myself included, sometimes it can be really great to just ask it for an example of a minimal function that does something, or to use something like Copilot, which runs on the back end. And for those who don’t know, Copilot is basically a generative system that helps generate code; it’s like ChatGPT, but tuned for code. So what can be useful is, while you’re typing, it will give you suggestions, like comments or, you know, variable names and things, which can be very easily incorporated while you’re using it. I think we might come to a point where people will be using it on their phones; it might be integrated with text messaging and functions like that. Like, I know Inflection has one you can text with, which is their version of ChatGPT. And I think we will start seeing a little bit more of that, where you can very easily pull it up and talk to it. But in its infancy right now, a lot of it is, I think, web-browser based.

Greg Alexander [00:11:11] Got it. So for those that are listening who haven’t developed the skill of prompt engineering and, after listening to you, have been inspired to do so, what advice would you give them?

Numa Dhamani [00:11:23] The best way is just by practicing. You can start with really simple tasks and prompts and then gradually move on to more complex ones, which maybe require logic or reasoning or brainstorming or critiquing. And just don’t be afraid to try different prompts and play around with it. Like, it’s actually really fun to do.

Greg Alexander [00:11:44] Yeah, I’m surprisingly enjoying myself after I was in Austin spending time with you. 

Numa Dhamani [00:11:49] When. 

Greg Alexander [00:11:50] I went back and said, all right, you know, it’s okay to make mistakes, and I tried it. I had your PowerPoint deck up in front of me, with all the instructions on how to do it, which we’ll go over with the members in a later session, and I was using it like that, and I was really pleased with how intuitive it was.

Numa Dhamani [00:12:10] Yeah, I’m so glad. 

Greg Alexander [00:12:12] So, Numa, you have a book coming out soon. Can you tell us the title of the book? What’s it about? And by the time this airs, it probably will be available, so where can people find it?

Numa Dhamani [00:12:23] Yeah. So the book is called Introduction to Generative AI, and it’ll be published by Manning Publications. It talks about how you can use large language models to their potential and do things like this, but at the same time it also tries to build an awareness of the risks and limitations that come with using generative AI technologies. So it outlines the broader economic, social, ethical, and legal considerations that you need to think about when you’re using generative AI. And it will be out this fall. Right now, you can preorder it just on manning.com, but closer to the release date it will be on Amazon, Target, Barnes and Noble, and some other resellers.

Greg Alexander [00:13:08] Well, congratulations on it. I’ll be buying a copy and will read it. And thank you for contributing to the body of knowledge by going through the hard work of writing a book. I’ve done that myself; I know how difficult that is. I have to ask, did you use A.I. to write the book?

Numa Dhamani [00:13:26] I did not. There are some examples from GPT and Bard and Claude in the book, but that is kind of the extent of it.

Greg Alexander [00:13:40] Okay, good, good. So it’s original. All right. Fantastic. 

Numa Dhamani [00:13:43] Yeah. Yeah. Original piece. 

Greg Alexander [00:13:45] Great. Well, Numa, on behalf of the membership, I really want to thank you for supporting Steven and helping us understand prompt engineering. Really looking forward to the member session. And congratulations again on your book. And thanks for being here. 

Numa Dhamani [00:14:01] Thank you. Thank you for having me. This is fun. 

Greg Alexander [00:14:03] All right. So a few calls to action for the audience. If you’re a member, please attend the Q&A session that we’ll have with Numa; look out for that invitation. If you’re a candidate for membership, go to collective54.com and apply, and the membership committee will consider your application and get back to you. And if you’re not ready for either of those things and you just want to learn more, I would direct you to my book. It’s called The Boutique: How to Start, Scale, and Sell a Professional Services Firm, which you can find on Amazon. So with that, thanks again, Numa, and thanks to the audience for listening, and we’ll talk to you soon.