A problem well stated is a problem half-solved

Lately, I’ve been reflecting on how we can prevent “things from getting worse” in a variety of settings — at work, within our families, and even more broadly in society. I’ve come to believe that what truly matters is our ability to course correct: to adjust and improve situations once we understand that they are deteriorating.

The difficulty, of course, lies precisely in that word — understand. Recognizing that things are getting worse is often far from easy. In many cases, it is hard even to articulate clearly what we mean when we say that something is “not going in the right direction.” And in most environments, it can be genuinely difficult to speak up and say, “Something is not right here.”

This is what brings to mind the quote often attributed to Charles Kettering, former head of research at General Motors: “A problem well stated is a problem half solved.” There is considerable wisdom in that statement, particularly when dealing with complex issues and when trying to mobilize groups of people toward meaningful course correction.

Why do I say this? Fundamentally because, in my experience, when problems or opportunities are not well stated, a host of negative dynamics tend to emerge.

  • People begin to adapt to problems instead of solving them — a powerful driver of many organizational and social failures.
  • A lack of clarity makes it difficult for competent individuals to take the lead.
  • Those who have adapted successfully to a flawed situation often resist change, even when the overall impact is negative.
  • Morale and energy decline as conditions worsen and collaboration becomes harder.

Within the limits of this post, I want to share some thoughts on what can help prevent this outcome. In particular, I want to highlight one family of tools that humans have developed to state problems with exceptional clarity: quantitative models. To be clear, the core point is not about mathematics per se, but about fostering clarity of language and transparency in order to enable course correction. Quantitative models are simply a particularly powerful way to achieve that.

Quantitative models: well-stating problems the hard way

Quantitative models are not always available — after all, they require measurable quantities — but when they are, they are remarkably effective. They force assumptions to be explicit, make key trade-offs visible, and provide a shared and precise language that greatly facilitates collaboration. It is no coincidence that the extraordinary progress of physics, chemistry, and engineering from the 16th century onward coincided with the widespread adoption of mathematical modeling.

In some cases, such as physics, we are even able to state incredibly complex problems with great precision without fully understanding them. Quantum mechanics is a striking example: we can formulate models that answer factual questions with astonishing accuracy, even when their interpretation in plain language remains deeply contested.

Another major benefit of mathematical models is that they uncover relationships that would otherwise remain hidden. A simple example illustrates this.

Suppose you run a sales effort in which you provide services at a loss, hoping that a fraction of prospects will eventually convert to paid customers. You would like to understand how to balance this investment in order to maximize profit. Assume that you are sophisticated enough to have an estimate of the probability of conversion for different prospect profiles.

A natural question arises: up to what probability of conversion should we be willing to provide services at a loss?

At a high level, the answer is intuitive: the gains from converted prospects must offset the losses from those who do not convert. In quantitative terms, this can be written as:

Conv\% \cdot (ConvValue-Investment) \ge (1-Conv\%) \cdot Investment

Solving this inequality at breakeven yields the minimum conversion probability required for profitability:

Conv\% = \frac{Investment}{ConvValue}
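For completeness, the algebra behind this threshold is a single step: expanding the inequality, the Conv\% \cdot Investment terms cancel on both sides.

Conv\% \cdot ConvValue - Conv\% \cdot Investment \ge Investment - Conv\% \cdot Investment \;\;\Longrightarrow\;\; Conv\% \ge \frac{Investment}{ConvValue}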

This expression immediately clarifies several things. Profitability depends only on the investment per prospect and the value of a converted customer. And if the investment exceeds the conversion value, there is simply no viable business.

A less obvious insight concerns sensitivity. The conversion threshold depends on investment and conversion value in exactly opposite relative terms: increasing investment by 10% raises the required conversion rate by 10%, while increasing customer value by 10% lowers the threshold by roughly 10% (exactly so in elasticity terms). This kind of elasticity-based reasoning is extremely hard to see without writing the problem down explicitly.
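In symbols, taking logarithms of the breakeven threshold makes the two elasticities explicit:

\ln Conv\% = \ln Investment - \ln ConvValue

A 1% relative increase in investment moves the threshold by +1%, and a 1% relative increase in customer value moves it by -1%, which is the opposite-sign behaviour described above.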

Of course, this model is simplified. In practice, conversion rates often depend on investment — for example, offering a richer free trial may increase the likelihood of conversion. At first glance, this seems to make the problem much harder: the conversion rate depends on investment, but investment decisions depend on the conversion rate.

Yet writing this down actually simplifies the situation. If the conversion probability is a function of investment, Conv\%(Investment), profitability requires

Conv\%(Investment) \ge \frac{Investment}{ConvValue}

Rather than a fixed threshold, we now have a relationship defining a region of profitability. Far from being an obstacle, this opens the door to optimization: by segmenting prospects by expected value, we can refine investment levels and improve outcomes.
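As a toy illustration of that optimization (the conversion curve, the customer value, and all numbers below are hypothetical, chosen only to make the sketch runnable), one can sweep investment levels against an assumed Conv\%(Investment) curve and keep the level that maximizes expected profit per prospect:

```python
# Toy sketch: pick the investment level that maximizes expected profit per
# prospect, given a hypothetical conversion curve Conv%(Investment).
import numpy as np

CONV_VALUE = 300.0  # value of a converted customer (hypothetical number)

def conv_rate(investment: float) -> float:
    # Hypothetical saturating curve: richer free trials convert better,
    # with diminishing returns.
    return 0.6 * (1.0 - np.exp(-investment / 100.0))

def expected_profit(investment: float) -> float:
    # Conv% * (ConvValue - Investment) - (1 - Conv%) * Investment,
    # which simplifies to Conv% * ConvValue - Investment.
    return conv_rate(investment) * CONV_VALUE - investment

investments = np.linspace(0.0, 200.0, 401)
profits = np.array([expected_profit(i) for i in investments])

best = investments[profits.argmax()]
profitable = investments[profits > 0]
print(f"best investment: {best:.1f}, profit per prospect: {profits.max():.1f}")
print(f"profitable region: {profitable.min():.1f} to {profitable.max():.1f}")
```

The sweep also exposes the region of profitability as a by-product, which is exactly the relationship-defined region mentioned above.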

This is a general principle in quantitative modeling: relationships between variables may complicate the mathematics, but they expand the space of possible strategies.

From thresholds to overall profits

So far, the discussion has focused on whether to pursue a given prospect segment. But what about overall profitability once we act?

If conversion rates are not easily influenced by investment, total profits can be written as:

Profits = MarketSize \cdot ( P(Converting)\cdot(ConvValue - Investment) - P(Not\ Converting)\cdot Investment )

Suppose the baseline conversion rate is 30% and the long-term value of a converted customer is three times the investment. Plugging in the numbers yields a loss: on average, each prospect served generates a loss equal to 10% of the investment.
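Spelling out the arithmetic per prospect, with the investment denoted I:

30\% \cdot (3I - I) - 70\% \cdot I = 0.6\,I - 0.7\,I = -0.1\,I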

At this point, several strategic levers are available: improve the product without raising costs, improve it while raising costs but increasing conversion, reduce free-trial costs, or develop targeting models to focus on prospects more likely to convert.

How should a team — say, a group of founders with very different backgrounds — decide which lever to prioritize? Disagreement is inevitable, and mistakes will be made. This is precisely why course correction matters, and why developing a precise language around the problem is so important.

Consider targeting. Suppose we segment the market into two equal-sized groups: Segment A with a 40% conversion rate, and Segment B with a 20% conversion rate. Targeting only Segment A yields positive profits — a substantial improvement driven by a very rough segmentation (equivalent to a two-bin scorecard with a Gini of roughly 23%). See below:

SegmentAProfits=\frac{1}{2}\cdot MarketSize\cdot(40\%\cdot(3\cdot Investment-Investment)-60\%\cdot Investment)=\frac{1}{2}\cdot MarketSize\cdot(20\%\cdot Investment)
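For comparison, the same calculation for Segment B (an added step, using the same assumptions) shows why serving it destroys value:

SegmentBProfits=\frac{1}{2}\cdot MarketSize\cdot(20\%\cdot(3\cdot Investment-Investment)-80\%\cdot Investment)=-\frac{1}{2}\cdot MarketSize\cdot(40\%\cdot Investment)

Averaging the two segments recovers the overall loss of 10% of the investment per prospect, so restricting the effort to Segment A is exactly what turns the business profitable.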

With further work, we could address questions such as: how valuable is improving targeting further? How does that compare with reducing free-trial costs or increasing customer lifetime value? Quantitative models allow us to ask — and answer — these questions systematically.

Clarity, knowledge, and course correction

One might object that quantitative models are difficult for many people to understand, and therefore limit broad participation in decision-making. This is a fair concern. But clarity is never free. Whether expressed mathematically or otherwise, precision requires effort.

Course correction depends on acquiring and applying new knowledge, and conversations about knowledge are rarely easy. We cannot hope to improve conversion through product enhancements without learning what users value most — and learning often requires time, attention, and risk. As Feynman put it, we must “pay attention” at the very least.

Recognizing knowledge, applying it, and revising beliefs accordingly is hard, even for experts. A well-known anecdote from Einstein’s career illustrates this. After developing general relativity, Einstein initially concluded — incorrectly — that gravitational waves did not exist. His paper was rejected due to a mistake, which he initially resisted. Yet within a year, through discussion and correction, he recognized the error and published a revision.

Even giants stumble. Progress depends not on being right the first time, but often on being willing — and able — to correct course.


What is the value of Machine Learning? – A stock trading example

Since the days of the big-data buzzwords, now over 20 years ago, data-driven changes to business models, product value, customer journeys, and more have advanced enormously.

Nevertheless, in the early days there were plenty of discussions about the supposed “value” of data and of such transformations. I know because I was there, in most cases on the selling side.

It was not uncommon to find middle managers treating data-driven transformations as a “nice to have”, a simple way to copy the competition, or something to do for PR purposes. It was also not unusual for large businesses to look down on new tech startups as if they were kids playing with unnecessary, fancy toys.

There was a lot of: “we do things this way because it works”, “our customers don’t need this”, “open source is dangerous”, “we do not need fancy machine learning for our simple problems” and so on.

There was also a lot of confidence in existing practices, something to the effect of: “I have been doing this for 20 years, I know better. My expertise beats your data science kid…”

To be clear, a lot of the skepticism was healthy and based on more than gut feelings, but over time the most negative voices disappeared. Nevertheless, one question remains relevant when starting any data-driven, AI-powered transformation: “What value will it deliver, considering the unknowns and the costs?”

Is it just headcount reduction? Or reduced waste? Or better products? How will we measure all of this?

I am not going to answer those questions in general; in my view it would be silly to do so. How technology-driven innovations can benefit a given solution to a problem is highly contextual, not something we can broadly generalise.

I will, though, share an example, and a good one I believe: an example where the value is upfront and easy to measure. How much value can a simple machine learning stock trading algorithm deliver when put against a simple rule-of-thumb trading strategy?

Disclaimer: I am not an experienced stock trader; I only started recently, following the sharp devaluation of the Japanese Yen (I am based and work in Tokyo). Do not reach out for investment advice.

In any case, I thought it would be interesting to quickly pit the two trading strategies against one another. Both are very simple and were devised within a few days, in the small window of time I have between my kids falling asleep and my inability to keep my eyes open.

The two strategies are as follows (both run on a daily trading routine) and are based only on S&P 500 stocks:

  • Rule of Thumb: Every day, rank S&P 500 stocks by the strength of their linear growth over the past 15 days and by how much the price has dropped over the past 3 days. Buy the 50 stocks that had the strongest linear growth but lost the most value in the last 3 days. The rationale is that the price should bounce back and regress to the mean of the linear growth. (A rough sketch of this selection rule follows right after this list.)
  • Machine Learning: Create a model that, given 15 days of S&P 500 stock price history, estimates the probability that the price will rise the next day. Buy stock in proportion to that probability, whenever the probability is above 50%. The model is trained on data from 50 random dates in the first part of 2023.
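To make the Rule of Thumb concrete, here is a rough sketch of the daily selection, assuming a hypothetical prices DataFrame of daily closing prices (rows = trading dates, columns = S&P 500 tickers). This is my reconstruction of the rule described above, not the original code, and the way the two sort criteria are combined (here, by summing ranks) is an assumption:

```python
# Sketch of the Rule of Thumb daily selection. `prices` is a hypothetical
# DataFrame of closing prices: rows = trading dates, columns = S&P 500 tickers.
import numpy as np
import pandas as pd

def rule_of_thumb_picks(prices: pd.DataFrame, n_picks: int = 50) -> list:
    window = prices.iloc[-15:]          # last 15 trading days
    x = np.arange(len(window))

    # Strength of linear growth over the past 15 days: slope of a fitted line,
    # normalised by the starting price so different stocks are comparable.
    slopes = window.apply(lambda col: np.polyfit(x, col.values, 1)[0] / col.iloc[0])

    # Change over the last 3 days (negative = the price has gone down).
    change_3d = prices.iloc[-1] / prices.iloc[-4] - 1.0

    # Combine the two criteria by summing ranks: strongest recent linear growth
    # first, biggest 3-day loss first. (How to combine them is an assumption.)
    score = slopes.rank(ascending=False) + change_3d.rank(ascending=True)
    return score.sort_values().head(n_picks).index.tolist()
```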

As you can see, even the Machine Learning approach is really simple: we feed the model only 15 days of stock price data and build just 3 features: average 15-day growth, average 15-day volatility, and the last 3 days’ price change (as a ratio). This is super basic, and it is only meant for comparison with the equally basic Rule of Thumb approach. The idea is to assess only the value of the Machine Learning algorithm itself (a simple logistic regression), keeping everything else as similar as possible to the Rule of Thumb strategy.
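Here is a minimal sketch of what such a model could look like, using scikit-learn and synthetic price data; the exact feature formulas, label definition, and training-date sampling are not spelled out in the text above, so the details below are my assumptions:

```python
# Minimal sketch: 3 features per stock per date, logistic-regression target =
# "does the price rise the next day?". All data below is synthetic, for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic stand-in for S&P 500 closing prices (500 trading days x 10 tickers).
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, size=(500, 10)), axis=0)),
    columns=[f"TICK{i}" for i in range(10)],
)

def make_features(prices: pd.DataFrame, date_idx: int) -> pd.DataFrame:
    """3 features built from the 15 days ending the day before `date_idx`."""
    window = prices.iloc[date_idx - 15:date_idx]
    daily_returns = window.pct_change().dropna()
    return pd.DataFrame({
        "avg_growth_15d": daily_returns.mean(),    # average 15-day growth
        "volatility_15d": daily_returns.std(),     # average 15-day volatility
        "change_3d": prices.iloc[date_idx - 1] / prices.iloc[date_idx - 4] - 1.0,
    })

# Training set: 50 random dates; label = price up on that day vs the day before.
training_dates = rng.choice(np.arange(20, 250), size=50, replace=False)
X_train = pd.concat([make_features(prices, d) for d in training_dates])
y_train = pd.concat([(prices.iloc[d] > prices.iloc[d - 1]).astype(int) for d in training_dates])

model = LogisticRegression().fit(X_train, y_train)

# At trading time: buy proportionally to the predicted probability, only when > 50%.
today_idx = 400
probs = pd.Series(model.predict_proba(make_features(prices, today_idx))[:, 1],
                  index=prices.columns)
weights = probs.where(probs > 0.5, 0.0)
weights = weights / weights.sum() if weights.sum() > 0 else weights
print(weights.round(3))
```

In spirit this mirrors the setup described above: three features from 15 days of history, a logistic regression, and buying proportionally to the predicted probability when it exceeds 50%.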

Here are the results! (Trading from July 2023 to the end of June 2024; price/value indexed to the first day of trading.)

Now, it does not look like a massive win, but we are looking at a return on investment of roughly 50%, as opposed to roughly 15% (the S&P 500 index gain) over a year. It is also worth noting that the ML model was actually doing worse while the S&P 500 index was falling in the second half of 2023. The rule of thumb, by contrast, seems to basically track the S&P 500 index.

To be honest, I could have given up on the ML trading algorithm while the S&P 500 index was going down, but overall it is interesting to see what such a simple ML model could achieve in a year.

When I showed that chart to my wife she said: “You made up this chart with your dreams I think… stick to the ETF, but keep dreaming”. I do not blame her for the skepticism; the chart above looks too good to be true from January 2024 onward.

In order to get more clarity and “curb my enthusiasm”, I ran the same comparative analysis over a different period, when the S&P 500 index was indeed quite volatile: from July 2022 to the end of June 2023 (with the ML model trained on 50 dates from the first half of 2022).

Below are the results:

It looks like in more volatile times the Machine Learning trading algorithm tracks the S&P 500 index pretty closely, while the Rule of Thumb approach does not transfer well at all: it fails miserably.

It is also worth noting that, over the 250 trading days, the ML approach beat the S&P 500 index on 142 days and beat the Rule of Thumb approach on 245 days.

In conclusion, this is no Jim Simons-level wizardry (look up Jim Simons if you are interested in algorithmic trading), but for our purpose the example shows pretty clearly what value Machine Learning can add, even in a very simple setting and with highly volatile data.

What would then be possible by integrating additional data sources, such as business performance, macroeconomic indicators, internet searches for stock terms, investor updates, and more?

Forecast: AI assistants will enable even further “scientization” of business practices and decision making, but we will need people able to articulate the solutions to a variety of audiences.
