# Given the past success of a product, how can we try to predict its future success?

It seems like a harmless question, and not too hard. A product that sold 20% of the times it was shown to a customer has a 20% chance of being bought when we show it to the next customer, right?

Well, not exactly. In the example above we are using the formula $\frac{S}{V}$, where $S$ is the number of sales and $V$ the number of times the product was shown (i.e. visits). If a product has no sales yet, it seems a bit unfair to say that it has zero chance of being bought.

# So how would you solve that problem?

We will follow the steps below:

- Let $X$ be a random variable representing the ‘true’ probability of a product converting. I’m using ‘true’ to mean an intrinsic property of the product. We can also view it as the conversion rate of the product as the number of visits goes to infinity.
- We will find the pdf $f(θ)$ of $X$
- We will find the expectation of the product converting, i.e. $E[X]$

All of the steps above are the same as the ones on the previous post on How to AB Test. As such I will skip the calculations and go directly to the results:

- $X$ will follow a Beta distribution
- Letting $S$ be sales and $N=V-S$ be non-sales, i.e. visits without a sale, $f(\theta)=\frac{\Gamma(S+N+2)}{\Gamma(S+1)\Gamma(N+1)}\theta^{S}(1-\theta)^{N}$
- $E[X]=\int_{0}^{1} x \cdot \frac{\Gamma(S+N+2)}{\Gamma(S+1)\Gamma(N+1)}x^{S}(1-x)^{N}\,dx=\frac{S+1}{S+N+2}$

Hence for a product with $S$ sales and $N$ non-sales we should expect future clients to buy the product with probability $\frac{S+1}{S+N+2}$.
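The formula above is easy to turn into code. A minimal sketch (the function name is my own, not from the post):

```python
def expected_cr(sales: int, visits: int) -> float:
    """Posterior mean of the conversion rate under a uniform prior.

    With S sales and N = V - S non-sales, X ~ Beta(S + 1, N + 1),
    whose mean is (S + 1) / (S + N + 2).
    """
    non_sales = visits - sales
    return (sales + 1) / (sales + non_sales + 2)

print(expected_cr(0, 0))    # a brand-new product: 0.5
print(expected_cr(20, 100)) # 20 sales out of 100 visits: 21/102
```

Note that a brand-new product gets 1/2, which is exactly the issue discussed next.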

# Cool, we are done then.

Not yet. Depending on the market / platform where you are selling products, you might have very different conversion rates; usually they are well below $50$%. Let’s assume for a second that all the products we are selling average to a conversion rate of $10$%. When a new product arrives ($S=0$ and $N=0$) we would predict a future conversion rate of $50$%, but intuitively a prediction of $10$% would be better.

# Yes, that is a problem. How do you fix that?

If we look back at step 2 above, we made a subtle assumption: we assumed that we have no prior information about our hidden random variable $X$. If we know nothing about the product, then we are indirectly assuming that $X$ is uniform on $[0,1]$ until we get new information. If you are wondering why assuming no knowledge about $X$ leads us to assuming $X$ is uniform on $[0,1]$, the reason is that Entropy is the best measure of our uncertainty about a system, and the Uniform Distribution maximizes Entropy. If you want to learn more about this I would recommend reading through Claude Shannon’s A Mathematical Theory of Communication. In the meantime I will go back to the question at hand.

We assumed we had no knowledge about $X$ when in fact we do. As such we can make use of Bayesian Updates.

# How do we do that?

We will use Bayes’ Theorem, $P(A\mid B)=\frac{P(B\mid A)P(A)}{P(B)}$. In other words, the probability of $A$ being true is updated after observation $B$ by the factor $\frac{P(B\mid A)}{P(B)}$. That formula works well for discrete cases, but for continuous cases we get a slightly different formula:

If $f(\theta)$ is our distribution as above and $g(\theta)$ encodes our initial knowledge of the distribution, then our updated knowledge is given by $f(\theta)g(\theta)$, normalized, since a pdf must always integrate to 1. In the no-knowledge situation we get $g\equiv 1$, so $f(\theta)g(\theta)=f(\theta)\cdot 1=f(\theta)$, as we concluded before. However, in our case we have the knowledge that the average product sells $10$% of the time.

# How do we translate that into the function $g$ ?

We will have to use our historic data. Assuming we have sold enough different products, each product has its own conversion rate, so the set of all our products has its own distribution. If we were to pick one product at random from our inventory, we would be taking a product with a conversion rate modeled by that distribution. Hence we can think of adding a new product as taking the conversion rate of a random product in our inventory. That distribution gives us $g$, and we already have $f$.

# So we just multiply the 2 distributions, normalise and we are done?

That will certainly give us an estimate way better than our initial one. However, there is one computational problem: since our inventory might be big, $g$ will probably be defined by a big array, and multiplying it by $f$, normalizing, and then computing the expectation might be computationally expensive.

# Can we simplify it?

There is one trick that might work, depending on our product distribution. We can try to model our initial distribution $g$ as a beta distribution. Our calculations are no longer exact here, so, depending on the sales data, this may or may not work well. If it does, we approximate $g$ by a beta distribution with $S_{0}$ sales and $N_{0}$ non-sales. Then (setting $x=\theta$ for easier typing) we get

$$f(x)g(x) \propto \frac{\Gamma(S+N+2)}{\Gamma(S+1)\Gamma(N+1)}x^{S}(1-x)^{N}\cdot\frac{\Gamma(S_{0}+N_{0}+2)}{\Gamma(S_{0}+1)\Gamma(N_{0}+1)}x^{S_{0}}(1-x)^{N_{0}} \propto x^{S}(1-x)^{N}\,x^{S_{0}}(1-x)^{N_{0}} = x^{S+S_{0}}(1-x)^{N+N_{0}}$$

So after normalization by $\int_{0}^{1}f(x)g(x)\,dx$, $f(x)g(x)$ has pdf $\frac{\Gamma(S+S_{0}+N+N_{0}+2)}{\Gamma(S+S_{0}+1)\Gamma(N+N_{0}+1)}x^{S+S_{0}}(1-x)^{N+N_{0}}$

This is a Beta distribution with $S+S_{0}$ sales and $N+N_{0}$ non-sales. It makes sense intuitively, because we can think of $g$ as the distribution of $X$ after $S_{0}$ sales and $N_{0}$ non-sales, and $fg$ is the pdf of $X$ updated after we see $S$ more sales and $N$ more non-sales.
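The post does not prescribe how to pick $S_{0}$ and $N_{0}$; one common way (an assumption on my part) is to moment-match a Beta distribution to the historic per-product conversion rates:

```python
# Hypothetical way to obtain S_0 and N_0: fit a Beta distribution to the
# historic per-product conversion rates by matching mean and variance.
# Whether a Beta fits the data well still has to be checked.
def fit_prior(rates):
    n = len(rates)
    mean = sum(rates) / n
    var = sum((r - mean) ** 2 for r in rates) / n
    common = mean * (1 - mean) / var - 1  # alpha + beta of the fitted Beta
    alpha = mean * common
    beta = (1 - mean) * common
    # The post writes the prior as "S_0 sales and N_0 non-sales",
    # i.e. Beta(S_0 + 1, N_0 + 1), so shift the parameters by 1.
    return alpha - 1, beta - 1

# Toy historic rates averaging 10% (made-up numbers for illustration).
s0, n0 = fit_prior([0.08, 0.12, 0.10, 0.09, 0.11, 0.10])
print((s0 + 1) / (s0 + n0 + 2))  # prior mean, ≈ 0.10
```

The fitted prior’s mean $\frac{S_{0}+1}{S_{0}+N_{0}+2}$ recovers the historic average by construction, which is exactly what we wanted the prior to encode.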

Our estimate for a product’s future conversion rate becomes $\frac{S+S_{0}+1}{S+S_{0}+N+N_{0}+2}$, which is both more accurate than the initial formula $\frac{S}{V}$ and equally fast to compute.

# Great, so I should always use $\frac{S+S_{0}+1}{S+S_{0}+N+N_{0}+2}$ instead of $\frac{S}{V}$.

Not really. First of all, there’s the previous assumption that $g$ can be modeled by a Beta distribution; how rigorous we want to be is up to us, but it nonetheless needs to be checked. Secondly, this was all about forecasting conversion rates. For analysis of past performance I would not suggest using this formula, since it is disconnected from the truth (i.e. the product’s actual conversion rate). Still, any predictive tool related to Conversion Rate should use this formula instead.

I will summarize with a few pros and cons of each predicted conversion rate.

## Division by zero:

For new products with no observations the formula $\frac{S}{V}$ has no meaning (division by zero), which can cause problems when doing bulk analysis, e.g. some machine learning on top of it.

## Cold Start Problem:

The Cold Start Problem is that products which haven’t been seen much never get shown enough to gather data. If we use the Conversion Rate as a proxy of how much we should show a given product, then new products will have $CR=0$ until their first conversion, which might take a while, and until then they will barely be shown. This does not happen with $\frac{S+S_{0}+1}{S+S_{0}+N+N_{0}+2}$, and the opposite happens with $\frac{S+1}{S+N+2}$, which maps a new product’s predicted $CR$ to $\frac{1}{2}$, generally well above the average $CR$.

## Overvaluing new products:

This is the final statement in the last paragraph: $\frac{S+1}{S+N+2}$ overshoots the predicted $CR$, while $\frac{S+S_{0}+1}{S+S_{0}+N+N_{0}+2}$ maps new products to the average $CR$, neither boosting nor penalizing them.
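The three estimators can be compared side by side on a brand-new product. The prior counts below are my own choice, picked so that the prior mean is exactly 10%:

```python
# A brand-new product: no sales, no non-sales yet.
S, N = 0, 0
# Hypothetical prior counts with mean (9 + 1) / (9 + 89 + 2) = 10%.
S0, N0 = 9, 89

# Naive rate S / V is undefined when there are no visits at all.
naive = S / (S + N) if (S + N) > 0 else float("nan")
# Uniform prior: always 1/2 for a new product, overvaluing it.
uniform_prior = (S + 1) / (S + N + 2)
# Historic prior: a new product starts at the average CR.
with_prior = (S + S0 + 1) / (S + S0 + N + N0 + 2)

print(uniform_prior)  # 0.5
print(with_prior)     # 0.1
```

As the observations accumulate, the data terms $S$ and $N$ dominate the prior counts, so all three estimators converge to the same value for well-observed products.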