# 3 Reasons Taguchi chose a squared Loss Function

In a __previous post__, we discussed the 𝗧𝗮𝗴𝘂𝗰𝗵𝗶 𝗟𝗼𝘀𝘀 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻:

*Loss = k (x – Target)^2*

and how it is used to estimate losses, especially those associated with *overly tight tolerances*.
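As a minimal sketch (the cost coefficient *k*, target, and measured values below are made-up illustrations), the loss function can be written as:

```python
# Sketch of the Taguchi loss function: Loss = k * (x - target)^2.
# k, target, and the sample values are hypothetical.

def taguchi_loss(x, target, k):
    """Quadratic quality loss for a measured value x."""
    return k * (x - target) ** 2

# Loss is zero only at the target, and doubling the deviation
# quadruples the loss:
print(taguchi_loss(10.0, 10.0, k=2.0))  # 0.0
print(taguchi_loss(10.5, 10.0, k=2.0))  # 0.5
print(taguchi_loss(11.0, 10.0, k=2.0))  # 2.0
```

Note how any deviation from target, even one still inside a tolerance band, incurs some loss: that is the key difference from a step-shaped "in-spec/out-of-spec" cost model.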

__Here are 3 of the reasons Taguchi chose a square loss function:__

▶️ A squared term is the first *symmetric* term in the 𝗧𝗮𝘆𝗹𝗼𝗿 𝗦𝗲𝗿𝗶𝗲𝘀 {remember those?} of any function that can be locally approximated by a power series

💭 i.e., even if the ‘true’ loss function 𝘄𝗲𝗿𝗲 𝗻𝗼𝘁 squared, the squared function would still be an *approximation* of the ‘true’ function near the target
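To make the Taylor-series point concrete, here is a small sketch with a hypothetical ‘true’ loss that is *not* a parabola, L(x) = 1 − cos(x − target). Its Taylor expansion about the target begins with the squared term (x − target)²/2, so near the target the quadratic tracks it closely:

```python
import math

# Hypothetical 'true' loss (NOT a parabola) and its quadratic
# Taylor approximation about the target.

def true_loss(x, target=0.0):
    return 1.0 - math.cos(x - target)

def quadratic_approx(x, target=0.0):
    # First symmetric Taylor term of 1 - cos about the target
    return 0.5 * (x - target) ** 2

# The two agree very well for small deviations from target,
# and drift apart as the deviation grows:
for dev in (0.01, 0.1, 0.5):
    print(dev, true_loss(dev), quadratic_approx(dev))
```

This is why the squared form is a reasonable default even when the exact economics of a deviation are unknown.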

▶️ The statistical variance:

*Variance = E[(x – mu)^2]*

(which is also a squared function), is a measure of **risk**
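A quick sketch of that definition, computed directly as the mean squared deviation (the sample values here are made up):

```python
# Variance = E[(x - mu)^2], computed on illustrative data.

def variance(xs):
    mu = sum(xs) / len(xs)                            # mean
    return sum((x - mu) ** 2 for x in xs) / len(xs)   # E[(x - mu)^2]

data = [9.8, 10.0, 10.2, 10.4]
print(variance(data))  # 0.05 (population variance)
```

The squared deviation inside the expectation is exactly the shape of the Taguchi loss with the mean playing the role of the target, which is why average loss and variance are so closely linked.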

▶️ Since **cost is additive** (total cost = cost1 + cost2 + …), use of a variance-like (squared) function is appropriate since **variance is also additive** (total variance = variance1 + variance2 + …) for uncorrelated random variables
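The additivity claim can be checked exactly with a small sketch: for two independent discrete random variables (the outcome values below are arbitrary), Var(X + Y) = Var(X) + Var(Y):

```python
from itertools import product

# Variance additivity for INDEPENDENT variables, verified by
# enumerating the full joint distribution (all pairs equally likely).

def variance(values):
    mu = sum(values) / len(values)
    return sum((v - mu) ** 2 for v in values) / len(values)

X = [1.0, 2.0, 3.0]   # equally likely outcomes of X
Y = [10.0, 14.0]      # equally likely outcomes of Y

sums = [x + y for x, y in product(X, Y)]  # distribution of X + Y
print(variance(X) + variance(Y))  # matches variance(sums)
print(variance(sums))
```

Enumerating every (x, y) pair with equal weight is what enforces independence here; with correlated variables a covariance term would appear and the simple sum would no longer hold.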

It's important to know the 𝗪𝗛𝗬 of things.

*<I hope this post has been of value to you; if so, share with a colleague and click the small heart icon below; it lets me know to create more content like this.>*