# 3 Reasons Taguchi chose a squared Loss Function

In a __previous post__, we discussed the __Taguchi Loss Function__:

*Loss = k (x − Target)^2*

and how it is used to estimate losses, especially those associated with *overly tight tolerances*.
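The formula above can be sketched directly in code. This is a minimal illustration; the `k` and `target` values are made up for the example, since the post does not specify any:

```python
# Taguchi loss: L = k * (x - target)^2
# k and target below are hypothetical values chosen for illustration.
def taguchi_loss(x, target=10.0, k=2.5):
    """Quadratic quality loss for a measured value x."""
    return k * (x - target) ** 2

# On target, loss is zero; it grows with the *square* of the deviation,
# so doubling the deviation quadruples the loss:
losses = [taguchi_loss(10.0 + d) for d in (0.0, 0.1, 0.2)]
```

Note that the loss is the same for a deviation of +0.1 or −0.1: the squared term makes the penalty symmetric about the target.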

__Here are 3 of the reasons Taguchi chose a squared loss function:__

▶️ A squared term is the first *symmetric* term in the __Taylor Series__ {remember those?} expansion of a function that converges locally as a power series

[i.e., even if the "true" loss function __was not__ squared, the squared function would still be a good local *approximation* of the "true" function]

▶️ The statistical variance:

*Variance = E[(x − mu)^2]*

(which is also a squared function) is a measure of **risk**
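The variance definition above translates directly into code. This sketch uses a small made-up sample, treated as a full population:

```python
# Variance computed straight from its definition: E[(x - mu)^2].
# The data values are hypothetical, chosen only for illustration.
data = [9.8, 10.1, 10.0, 10.3, 9.8]

mu = sum(data) / len(data)                               # mean
variance = sum((x - mu) ** 2 for x in data) / len(data)  # E[(x - mu)^2]
```

Structurally this is the Taguchi loss with `k = 1` and the mean playing the role of the target, which is why the two ideas connect so naturally.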

▶️ Since **cost is additive** (total cost = cost1 + cost2 + …), use of a variance-like (squared) function is appropriate since **variance is also additive** (total variance = variance1 + variance2 + …) for uncorrelated random variables
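The additivity property is easy to demonstrate by simulation. In this sketch, two independent (hence uncorrelated) Gaussian noise sources with assumed standard deviations 1 and 2 are summed, and the variance of the sum is compared with the sum of the variances:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def var(xs):
    """Population variance: E[(x - mu)^2]."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

n = 100_000
a = [random.gauss(0.0, 1.0) for _ in range(n)]  # variance ~ 1
b = [random.gauss(0.0, 2.0) for _ in range(n)]  # variance ~ 4
total = [x + y for x, y in zip(a, b)]

# For uncorrelated sources, var(total) ~ var(a) + var(b) ~ 5
gap = abs(var(total) - (var(a) + var(b)))
```

Cost behaves the same way across process steps, which is what makes a variance-like loss the natural bookkeeping unit.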

It's important to know the __WHY__ of things.

*I hope this post has been of value to you; if so, share it with a colleague and click the small heart icon below; it lets me know to create more content like this.*