The Monday Morning Rebuild: Stop Answering the Same Questions from Scratch

If you’re running a boutique professional services firm, you’ve probably embraced Greg’s hyper-specialization thesis: win by going narrow and deep in your market. Pick your niche. Own it. Become the obvious choice.

Here’s something most of us miss: we hyper-specialize our services but leave our operations completely generalist. The founder is still the one holding everything together and doing mental math across a half-dozen tabs, asking the same complex strategic questions every week.

That’s founder dependency at the operational level. And it was holding back our firm.

The Pattern

Every boutique founder has a handful of questions they ask repeatedly. The data to answer them exists somewhere in your systems. Your time tracker has the hours. Your project management tool has the deadlines. Your accounting software has the numbers.

But judgment lives in the founder’s head. What do the numbers mean? What does “good” look like? What needs attention right now? You’re the one synthesizing across systems, pattern-matching against experience, and deciding what matters. And you’re doing it from scratch every time.

At January Advisors, the data science consulting firm I run, I started paying attention to which questions I kept asking. Specifically, the operational questions. The ones I was answering on Sunday nights and Monday mornings, pulling up dashboards and spreadsheets and trying to assemble a picture of where the firm actually stood.

I found four questions I needed to stop answering from scratch, and I used AI to do it.

Question 1: What Are We Spending on Technology?

This is the one every founder can relate to. You sign up for tools as you need them. A project management platform here, a design tool there, an API subscription for a client project that never got cancelled. Renewals hit at random times throughout the year. Some are monthly, some annual, some usage-based. Nobody has the full picture, so no one can answer the basic question: what does it cost to run this firm’s technology stack?

We built a technology costs agent that tracks every subscription, flags upcoming renewals, categorizes costs by function, and produces a weekly snapshot of variable technology spend. No surprises. It took a lot of trial and error, working with different APIs and data limitations, but it works. When annual renewals come up, we’ve already budgeted for them. When we evaluate a new tool, we can see what we’re already paying for in that category.
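The core of that agent is simpler than it sounds. A minimal sketch of the renewal-flagging and category-rollup logic, in Python (the tool names, fields, and 30-day window here are hypothetical illustrations, not our actual schema; our agent also pulls live data from billing APIs):

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Subscription:
    name: str
    category: str          # e.g. "design", "project management"
    annual_cost: float     # monthly and usage-based tools normalized to yearly spend
    next_renewal: date

def upcoming_renewals(subs, within_days=30, today=None):
    """Flag subscriptions renewing soon, so annual renewals never surprise the budget."""
    today = today or date.today()
    cutoff = today + timedelta(days=within_days)
    return [s for s in subs if today <= s.next_renewal <= cutoff]

def spend_by_category(subs):
    """Roll up normalized annual cost per functional category."""
    totals = {}
    for s in subs:
        totals[s.category] = totals.get(s.category, 0.0) + s.annual_cost
    return totals
```

With the data normalized like this, the “what are we already paying for in this category?” question becomes a one-line lookup instead of a spreadsheet hunt.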

It sounds simple, and it is. That’s the point. The question was always simple. We just never built anything to hold the answer.

Question 2: Who Has Capacity and Are We On Track?

This is the weekly question. Every Monday, we do some version of this: Who’s busy? Who’s available? Are we burning through that project budget too fast? What happens when the engagement that ends next month wraps up?

The inputs are scattered. Hours logged in the time tracker. Deadlines in the project management tool. Budget caps in the contracts. Team availability is often in our heads, because someone somewhere knows that the web development lead is on PTO next week and a new analyst is still ramping up. Those facts change the staffing calculus.

We built a staffing agent that pulls all of this knowledge together into a single weekly view: utilization by person, budget burn rates on active projects, upcoming deadlines, and a capacity forecast that shows what frees up as projects end. It accounts for holidays, PTO, and the difference between total utilization and client-billable utilization.
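The arithmetic underneath that weekly view is straightforward once the inputs are in one place. A sketch of the two calculations the paragraph above describes, in Python (the hour figures and function names are illustrative assumptions, not our production code):

```python
def weekly_capacity(base_hours, pto_hours=0.0, holiday_hours=0.0):
    """Hours a person can actually work this week, after PTO and holidays."""
    return max(base_hours - pto_hours - holiday_hours, 0.0)

def utilization(logged_hours, available_hours):
    """Fraction of available time that was logged.

    Pass total logged hours for total utilization, or only client-billable
    hours for billable utilization -- the two numbers tell different stories.
    """
    return logged_hours / available_hours if available_hours else 0.0
```

The distinction in the docstring is the one that bit us: someone can be 100% utilized on internal work and 0% billable, and a view that only shows the first number hides the second.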

Before this existed, a few of us would meet every week to discuss capacity. We’d compare the staffing pictures in each of our heads. It worked at 6 people. It stopped working at 10. By 15, it was unsustainable.

Question 3: Did This Project Actually Work?

Most firms know their top-line revenue per project. Very few know their effective rate — what they actually earned per hour when all the scope creep, overruns, and “quick follow-ups” are factored in. Even fewer firms capture what worked and what didn’t in a way that’s useful for the next project.

We built a close-out agent that runs at the end of every significant engagement. It calculates the real economics. The contract value divided by actual hours gives us the effective rate. It compares that to our benchmarks. It captures what went well, what we’d do differently, and what the client relationship looks like going forward.
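The effective-rate calculation itself is one division; the value is in running it consistently. A sketch with hypothetical numbers (the dollar figures below are illustrations, not our actual engagements):

```python
def effective_rate(contract_value, actual_hours):
    """What the firm actually earned per hour, scope creep and overruns included."""
    return contract_value / actual_hours

# A $60,000 engagement scoped at 400 hours implies $150/hour at signing.
# If it actually took 500 hours, the effective rate drops to $120/hour.
```

Comparing that number to the rate you assumed at signing, project after project, is what surfaces the engagement types you consistently underscope.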

The important part is the accumulation of knowledge. After doing this across dozens of projects, we know which types of engagements are consistently profitable and which ones we underscope. We know which methods take longer than we estimate. We know which client sectors have the best margins. That pattern recognition used to live in my head. Now it’s institutional knowledge, available to everyone.

Question 4: Where Are We Financially — Really?

Not the P&L. Not last quarter’s numbers. The forward-looking picture that keeps the firm going.

This is the question underneath all the other questions: given our active contracts, our pipeline, our team’s capacity, and our burn rate, where are we headed?

We built a finance agent that answers this with data instead of anxiety. It tracks utilization trends over time, forecasts revenue based on active engagements, looks at our pipeline, models scenarios, identifies capacity gaps and surplus, and flags when the numbers don’t match the narrative.

This is also the question that’s hardest to delegate in a traditional firm. The founder has the full context of client relationships, pipeline conversations, team dynamics, and market conditions. No one else can assemble that same picture. Building a finance agent doesn’t replace human judgment, but it gives that judgment something to work with beyond intuition.

The Move

The pattern across all four is the same:

  1. Identify the question you keep asking. Not a task. Something that requires judgment, not just data.
  2. Map where the answers live. Usually scattered across 3-4 systems that don’t talk to each other.
  3. Encode what “good” looks like. This is your judgment. The benchmarks, the thresholds, the “this needs attention” signals.
  4. Make the answer available without you in the loop. So anyone on your team can get to the answer without reconstructing it from scratch.
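Step 3 is the one founders skip, so it’s worth making concrete. Encoding “what good looks like” can be as simple as named thresholds plus the rules that turn metrics into attention signals. A minimal sketch, assuming hypothetical benchmark values (yours will differ):

```python
BENCHMARKS = {
    "billable_utilization_floor": 0.60,  # below this, the team is undersold
    "budget_burn_ceiling": 0.80,         # above this, a project needs a look
}

def attention_flags(metrics, benchmarks=BENCHMARKS):
    """Turn raw metrics into the 'this needs attention' signals step 3 describes."""
    alerts = []
    if metrics["billable_utilization"] < benchmarks["billable_utilization_floor"]:
        alerts.append("Billable utilization below target")
    if metrics["budget_burn"] > benchmarks["budget_burn_ceiling"]:
        alerts.append("Project budget burning faster than planned")
    return alerts
```

Once the thresholds live in a file instead of your head, step 4 follows almost for free: anyone on the team can run the check and get the same answer you would have given.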

This is hyper-specialization applied inward. Each system is a specialist that knows one domain deeply. It answers one big question well, using the right data, with the right judgment baked in.

Why This Matters for Scale and Exit

Greg talks about the founder dependency trap: when the business is completely dependent on the founder, it’s a practice, not a firm. Most founders think about this in terms of client relationships and service delivery. Can the team do the work without me?

But operational dependency is just as real. If you’re the only one who knows whether the firm can take on a new project, whether a completed engagement was profitable, or whether you can afford to hire, that’s dependency too. And it’s invisible, because it doesn’t look like work. It looks like “thinking about the business.”

Think of it this way: encoding your operational questions into systems extends your judgment. The firm’s operational intelligence becomes an asset for everyone, not a liability trapped in one person’s head.

Start with the question you asked yourself last Sunday night. That’s your first build.