Nowadays, cars feature more electronic components than ever. Embedded systems such as braking, acceleration, navigation, communication, and dozens of others must perform flawlessly in various conditions because human life is at stake. With current trends in autonomous vehicle technology, the number of safety-critical embedded systems will only increase and will require more attention.

At Codelab, we mostly operate in industries with very high security standards and have many years of experience in Automotive projects. Therefore, our organizational processes are aligned with the Automotive SPICE® standard and focused on constant improvement. Fixing problems quickly is significant, but the only thing that can prevent mistakes from happening is finding the root cause and tackling it properly. For Root Cause Analysis we used to apply the 5 Whys technique, developed almost a hundred years ago by Sakichi Toyoda. It is a very popular tool, especially in Lean Management. However, we quickly realized that, regardless of some advantages, this technique seems too simplistic for our needs. Too often the final answer of an investigation is 'human error', and too often it entails a discouraging blame culture. We felt the need to find a powerful, effective, and socially conscious tool for post-mortem analysis. We got inspired by the Infinite Hows method, thoroughly described by John Allspaw, and also by Nick Stenning with Jessica DeVita.

The method does not simply change one word to another. Replacing Whys with Hows comes primarily with a different mindset and with the effort put into asking better questions, and therefore getting more valuable answers. In the following sections, I will present this topic in more detail.

Infinite Hows method

Asking 'why' may seem a perfect start for any investigation, but in the end it inevitably turns into asking who is responsible. Judging a specific person won't help the project team with either learning or improving.

Let's take an example. If we start a 5 Whys analysis with the question 'why was the delivery late?', we will probably learn that the root cause of the problem is that either the manager doesn't have sufficient management skills or someone on the team is not skilled or trained enough to deliver tasks on time. Yes, training is important, but we don't need a proper analysis to come to this conclusion, and it doesn't help with understanding the event, let alone improving and learning from mistakes. Asking people multiple times why they did something may put them on the defensive and make them speak less frankly, especially when they are being asked by someone more powerful in the organization.

When using the Infinite Hows method, we start by asking: 'how did we make the delivery?' This gives us an opportunity to learn how we evaluated the scope of the work, how much time pressure was experienced, how often delivery delays happen, how the approach to coding and testing was chosen, and the list goes on and on. Asking 'how' lets us understand the conditions that allowed the failure to happen, and gives a wider perspective and more valuable data. It allows us to comprehend the whole story and find out what was responsible for the error. The shift of responsibility from who to what not only helps with understanding, learning, and making project improvements, but also keeps the working environment respectful, open-minded, and engaging.
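To make the contrast concrete, the open-ended 'how' questions above could be captured in a simple post-mortem template. This is only an illustrative sketch; the class and prompt names are my own invention and not part of any published Infinite Hows tooling:

```python
from dataclasses import dataclass, field

# Hypothetical starter prompts, taken from the questions discussed above.
HOW_PROMPTS = [
    "How was the scope of the work evaluated?",
    "How was time pressure experienced by the team?",
    "How often do delivery delays happen?",
    "How was the approach to coding and testing chosen?",
]

@dataclass
class HowEntry:
    prompt: str
    # Each question collects multiple perspectives rather than one 'root cause'.
    answers: list = field(default_factory=list)

@dataclass
class PostMortem:
    incident: str
    entries: list = field(
        default_factory=lambda: [HowEntry(p) for p in HOW_PROMPTS]
    )

    def record(self, prompt: str, answer: str) -> None:
        """Attach an answer to an existing prompt, or open a new 'how' question."""
        for entry in self.entries:
            if entry.prompt == prompt:
                entry.answers.append(answer)
                return
        # The list of questions is open-ended -- hence 'Infinite' Hows.
        self.entries.append(HowEntry(prompt, [answer]))

review = PostMortem("Late delivery of release 1.2")
review.record(HOW_PROMPTS[0], "Estimated from a similar feature; integration effort was not visible.")
review.record("How were delays communicated?", "Raised only at the weekly status meeting.")
```

The key design point is that nothing here terminates in a single answer: every question can gather many perspectives, and new questions can always be added, mirroring the shift from a linear why-chain to a widening set of hows.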

To work with the Infinite Hows method, we need to start by understanding people's local rationality.

Local rationality

When we consider our own actions and decisions, it is obvious that we try to do what makes sense to us at the time. We believe that we do reasonable things given our knowledge and understanding of the problem at that particular moment. In most cases when we make a decision, we think it's the best, rational way; otherwise, we wouldn't have done it. This is known as the 'local rationality principle'. Our rationality is implicitly local because it is limited to our mindset, knowledge, capabilities, and goals, as well as to the amount of information we can handle. While we usually accept this limitation for ourselves, we often use different criteria for others. We assume that they should have or could have acted differently, based on our current, post-incident knowledge. That's why we are so eager to look for guilty parties during a failure investigation. It is a natural human tendency to imagine alternative outcomes to events that have already occurred. But again, while counterfactual thinking is tempting, it does not convey information about the complex situation, the environment, and the problem itself.

Asking better questions, leading interviews in a more empathetic manner, and analysing problems from broader perspectives is a continuous learning process. There is no simple manual. As a result, a complex analysis is time-consuming and doesn't give a simple answer; however, that doesn't make it a weak analysis. It is the analysis that makes us learn, and any failure prevention depends on that learning.