
3 Reasons Taguchi Chose a Squared Loss Function


In a previous post, we discussed the 𝗧𝗮𝗴𝘂𝗰𝗵𝗶 𝗟𝗼𝘀𝘀 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻 (TLF). The TLF states that a deviation of the quality characteristic (x) from its Target value incurs a loss proportional to the square of the deviation:


Loss = k (x – Target)^2
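
To make the formula concrete, here is a minimal Python sketch (the target, k, and measurement values are made-up, purely for illustration):

def taguchi_loss(x, target, k):
    # Loss = k * (x - target)^2 for a single measured value x
    return k * (x - target) ** 2

# Hypothetical example: shaft diameter, target 10.00 mm, k = $50 per mm^2
print(taguchi_loss(10.05, target=10.00, k=50))   # 0.125 -> small deviation, tiny loss
print(taguchi_loss(10.50, target=10.00, k=50))   # 12.5  -> 10x the deviation, 100x the loss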


Here are 3 reasons why Taguchi’s formula is pure genius…👇



🚀️ 𝗥𝗲𝗮𝘀𝗼𝗻 #𝟭: 𝗦𝘆𝗺𝗺𝗲𝘁𝗿𝘆: A squared term is the first symmetric (even) term in the Taylor Series {remember those?} of any loss function that can be locally expanded as a power series around the Target


[💭 i.e., even if the ‘true’ loss function 𝘄𝗲𝗿𝗲 𝗻𝗼𝘁 squared, the squared function would still be a good local approximation of the ‘true’ function]
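
To spell that out (a sketch, assuming the ‘true’ loss L(x) is smooth, equals zero at the Target, and is minimized there), expand L around the Target:

L(x) ≈ L(Target) + L'(Target)·(x – Target) + ½·L''(Target)·(x – Target)^2 + …

The first term is zero (no loss exactly on target) and the second term is zero (the Target is a minimum), so the leading surviving term is the squared one:

L(x) ≈ k (x – Target)^2, with k = ½·L''(Target)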



🚀𝗥𝗲𝗮𝘀𝗼𝗻 #𝟮: 𝗥𝗶𝘀𝗸

Taguchi wanted his loss function to reflect 𝗿𝗶𝘀𝗸, and the statistical 𝘃𝗮𝗿𝗶𝗮𝗻𝗰𝗲 (itself a squared function):

Variance = E[(x – mu)^2]


is just such a measure. Higher variance means lower consistency and greater unpredictability, leading to higher 𝗿𝗶𝘀𝗸.
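
The link between the loss function and variance can be made explicit. Taking the expectation of the loss over a process with mean mu and variance sigma^2 (a standard identity, not tied to any particular distribution):

E[Loss] = k·E[(x – Target)^2] = k·[(mu – Target)^2 + sigma^2]

So the average loss splits into an off-target penalty plus a term proportional to the variance, which is exactly the risk the squared form is meant to capture.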



🚀𝗥𝗲𝗮𝘀𝗼𝗻 #𝟯: 𝗖𝗼𝘀𝘁

Since 𝗖𝗼𝘀𝘁 is 𝘢𝘥𝘥𝘪𝘵𝘪𝘃𝘦 (total cost = cost1 + cost2 + …), a variance-like (squared) function is a natural fit: 𝘃𝗮𝗿𝗶𝗮𝗻𝗰𝗲 (var) is also 𝘢𝘥𝘥𝘪𝘵𝘪𝘃𝘦 (total var = var1 + var2 + …) for uncorrelated random variables!
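
A quick numerical check of that additivity claim, using simulated data (the two distributions below are arbitrary choices, not from any real process):

import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(loc=5.0, scale=0.3, size=100_000)   # dimension of part 1
x2 = rng.normal(loc=3.0, scale=0.4, size=100_000)   # dimension of part 2, independent of part 1

print(x1.var() + x2.var())   # ~0.25 (var1 + var2, since 0.3^2 + 0.4^2 = 0.25)
print((x1 + x2).var())       # ~0.25 (variance of the assembly matches, because x1 and x2 are uncorrelated)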



<I hope this post has been of value to you; if so, click the small heart icon below; it lets me know to create more content like this.>
