
3 Reasons Taguchi Chose a Squared Loss Function


In a previous post, we discussed the Taguchi Loss Function:


Loss = k (x - Target)^2


and how it is used to estimate losses, especially those associated with overly tight tolerances.
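The loss formula above is straightforward to compute. Here is a minimal sketch; the values of k (cost per unit deviation squared) and the target are made up for illustration:

```python
def taguchi_loss(x, target, k):
    """Taguchi quadratic loss: cost grows with the squared deviation from target."""
    return k * (x - target) ** 2

# Hypothetical example: target = 10.0 mm, k = 2.0 $/mm^2
print(taguchi_loss(10.0, 10.0, 2.0))  # on target -> 0.0 loss
print(taguchi_loss(10.5, 10.0, 2.0))  # 0.5 mm off -> 0.5 ($)
```

Note that a part exactly on target incurs zero loss, and loss grows smoothly (not as a step at the tolerance limit) as the part drifts away.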


Here are 3 of the reasons Taguchi chose a squared loss function:


โ–ถ๏ธ A squared term is the first symmetric term in the ๐—ง๐—ฎ๐˜†๐—น๐—ผ๐—ฟ ๐—ฆ๐—ฒ๐—ฟ๐—ถ๐—ฒ๐˜€ {remember those?} of functions that locally converge using a power series


💭 i.e., even if the 'true' loss function was not squared, the squared function would still be a good local approximation of the 'true' function
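The Taylor-series point can be checked numerically. As a hypothetical 'true' loss that is not a pure square, take cosh(x - target) - 1, whose Taylor expansion is d^2/2 + d^4/24 + ...; near the target, the squared term dominates:

```python
import math

def true_loss(x, target):
    # A hypothetical symmetric 'true' loss that is NOT a pure square
    return math.cosh(x - target) - 1.0

def quadratic_approx(x, target):
    # First symmetric Taylor term of cosh(d) - 1: d^2/2
    return (x - target) ** 2 / 2.0

for d in (0.5, 0.1, 0.01):
    print(d, true_loss(d, 0.0), quadratic_approx(d, 0.0))
```

The closer x is to the target, the smaller the gap between the 'true' loss and its squared approximation, which is exactly the regime a loss function is used in.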


โ–ถ๏ธ The statistical variance:


Variance = E[(x - mu)^2]


(which is also a squared function) is a measure of risk


โ–ถ๏ธ Since cost is additive (total cost = cost1 + cost2 +โ€ฆ), use of a variance-like (squared) function is appropriate since variance is also additive (total variance = variance1 + variance2+โ€ฆ) for uncorrelated random variables


It's important to know the WHY of things.


I hope this post has been of value to you; if so, share with a colleague and click the small heart icon below; it lets me know to create more content like this.
