StatMind
Management Research, Educational Services & Training

Blog

2019-10-18

High Performance Organizations and the Guru’s Clothes

[right]Whosoever desires constant success must change his conduct with the times.[/right]
[right]Machiavelli[/right]
[justify]Most of us with experience in organizations would endorse the desirability of good management. Both across and within organizations, people hold different views of what constitutes good management. We can come up with anecdotal evidence on critical events affecting the performance of the organizations we work for, yet we seldom think of the gradual but ever-changing role of management in society. Insightful readings are Chandler’s The Visible Hand: The Managerial Revolution in American Business (Chandler, 1977) and Burnham’s The Managerial Revolution (Burnham, 1941). Burnham reasoned that a new society would emerge in which managers take the position of the ruling class. Likewise, one of Chandler’s propositions held that a managerial hierarchy would become a source of power, permanence and continued growth. McLaren (2011) notes that in a knowledge economy, managers still hold legitimate authority while losing control of the means of production, as those means become skilled professionals themselves.[/justify]
[justify]Against this background, it is not surprising that management and leadership continue to be prominent – overrated, I would say – issues in models of high performance organizations. Given the time it takes to revolutionize societies, it is not surprising either that these models are rigid and fail to go with the flow. While Burnham predicted the managerial revolution to take place in the Depression Era and interwar period, with signs of the end of capitalism, current trends like globalization, digitization and automated production may lessen, or at least change, the need for professional managers.[/justify]
[justify]Whatever shape the future takes, and whatever the speed of change, the quest for a universal theoretical model of High Performance Organizations (HPO) is a far cry from the reality we live in. Still, we are “blessed” with self-acclaimed gurus – consultants, mostly – who claim to have completed the quest for the holy grail. One example is André de Waal, founder of an (or the) HPO framework. De Waal has developed a model consisting of five factors, comprising a total of 35 items, which, I quote from the website of his HPO Center, [font=Calibri, sans-serif]“[..] will improve the financial and strategic [..] results of your organization. The HPO-framework is thus the only scientifically validated technique in the world that will help you make your organization high performing”.[/font][/justify]
[justify]The holy grail is a cup with miraculous powers that provides eternal youth and infinite abundance. While these pay-offs have not yet been encountered in any organization, the miraculous nature is at odds with the alleged scientific validation. But the claim that this framework stands out as the only one in its family that is scientifically validated impresses. To make a comparison, in the field of competition and competitiveness, Michael Porter is pretty famous – maybe with guru status. Googling “competitiveness” with and without “Porter” reveals that Porter appears in roughly 1 out of 7 hits. A similar indicator for “Waal” on his topic “high performance organizations” reveals a score of 1 out of 1,000 or so. That by itself does not deny De Waal’s guru status, of course. But since there are evidently so many writings on high performance organizations that do not refer to De Waal, it is thinkable that at least one of them refers to research that is scientifically valid. Finding a couple of needles in this haystack is a feasible strategy to refute the claim of being the only validated framework. Another tactic is to show that De Waal’s framework is not scientifically valid. More relevant, and easier to achieve. Here we go.[/justify]
[justify]1. Why Factors?[/justify]
[justify]De Waal distinguishes five factors: continuous improvement; openness & action orientation; management quality; employee quality; and long-term orientation. The idea is that improving on these factors causes organizational performance to go up. But in none of the follow-up studies that are meant to lend support to the model, is organizational performance taken into consideration. [/justify]
[justify]In addition, there is the question of how organizational performance is measured. Likely candidate indicators are growth, profit, market share, employee and customer satisfaction, and so on. A model with, say, customer satisfaction as the dependent variable, is bound to have a time lag, as internal improvements won’t lead to increased customer satisfaction overnight. That is, longitudinal studies would be needed. There are no systematic longitudinal studies on HPO.[/justify]
[justify]2. Why These Five Factors?[/justify]
[justify]De Waal once started by factor-analyzing data on many items drawn from the literature, and ended up with these five factors and 35 items. Comparing De Waal’s framework to alternative models raises the question why other factors are not included. Testing the framework time and again using only the retained five factors and 35 items does not guarantee that all potentially relevant factors and items are covered. Indeed, a study in Nepal with more factors and more items revealed that factors not included in De Waal’s framework have a higher impact on perceived performance.[/justify]
[justify]Conversely, it is questionable whether all five factors and 35 items of the standardized questionnaire are relevant. Most follow-up studies indicate that items, or even complete factors, do not fit the structure implied by De Waal’s framework. Unfortunately, the question why items don’t fit is left unanswered.[/justify]
[justify]3. Five Factors?[/justify]
[justify]Studies using De Waal’s framework – in most cases implemented by De Waal himself – are based on a survey of employees and managers within one organization. By itself, it is interesting to note that respondents who are working in the very same environment, come up with diverging scores on the five factors. The underlying idea of factors is that they describe various dimensions of performance, and ideally the correlations between the factors are low. But this is not the case at all. The factors are highly correlated. There may be several causes. [/justify]
[justify]One is that the dimensions are indeed correlated. Some respondents perceive, for example, poor management quality and low openness, while others think highly of management and openness; and since the factors go hand in hand, the correlations are expected. But then, why distinguish factors? To give you an idea, tests on De Waal’s database show that any random pick of five items out of the 35 would give a Cronbach’s α reliability well above the benchmark of 0.70.[/justify]
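Since De Waal’s database is not public, a minimal sketch with simulated data can illustrate the point: when one dominant factor drives all 35 items, any random pick of five items forms a “reliable” scale. The loadings and sample size below are illustrative assumptions, not De Waal’s figures.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(42)
n, k = 500, 35
common = rng.normal(size=(n, 1))            # one dominant factor drives all items
data = 0.8 * common + 0.6 * rng.normal(size=(n, k))

# Any random pick of five items yields an alpha well above 0.70
alphas = []
for _ in range(100):
    pick = rng.choice(k, size=5, replace=False)
    alphas.append(cronbach_alpha(data[:, pick]))
print(min(alphas))
```

With an average inter-item correlation of about 0.64 (as implied by the loadings above), even the worst five-item pick stays far above the 0.70 benchmark – which is exactly why the five-factor partition adds little.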
[justify]An alternative explanation is response bias and/or a halo effect. Some respondents feel more negative about the organization as a whole, and hence give lower scores to all items than other respondents do. Or some respondents seek scores in the middle, while others express their views by seeking the extremes – even though their true opinions are the same. One way of adjusting for these biases is to normalize scores per respondent. Interestingly, a factor analysis of normalized data suggests more factors. Applying the revised model with more factors to the non-normalized data suggests a much better fit; that is, the “true” structure may be obscured by response bias. Since De Waal never corrected for response bias, his once-and-for-all five-factor model is likely to be a misspecification.[/justify]
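Normalizing per respondent simply means z-scoring each respondent’s row of answers, which removes both an individual’s overall level (the additive bias) and spread (the extremity bias). A minimal sketch with two made-up respondents who hold the same relative views but use the scale differently:

```python
import numpy as np

def normalize_per_respondent(scores):
    """Z-score each respondent's row: removes the individual's overall
    level (additive bias) and spread (extremity bias) before any
    factor analysis is run on the data."""
    scores = np.asarray(scores, dtype=float)
    mu = scores.mean(axis=1, keepdims=True)
    sd = scores.std(axis=1, ddof=1, keepdims=True)
    return (scores - mu) / sd

# Hypothetical data: identical relative views, different response styles
raw = np.array([[6.0, 7.0, 8.0],    # middle-seeking respondent
                [2.0, 5.5, 9.0]])   # extreme-seeking respondent
z = normalize_per_respondent(raw)
print(z)  # both rows become [-1, 0, 1]: the same underlying pattern
```

After this correction, what is left to factor-analyze is the relative pattern across items – which is why the factor structure of normalized data can differ from that of the raw scores.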
[justify]4. Atomistic Fallacy[/justify]
[justify]This misspecification, however, is irrelevant. Whether we have five factors or more, the fact remains that at the end of the day all factors are strongly intercorrelated. It is academically interesting to understand the dimensionality of the items as measured, but there’s no more to it. Even if we accept the five-factor structure as valid (covering all relevant aspects of performance in the specific case, or organization, at hand), the best we can do with the data is to relate respondents’ scores on these factors to their subjective perceptions of organizational performance. If the HPO-scores suffer from response bias, so will the scores on perceived performance. That is, respondents with low scores on the HPO-factors will give low scores on performance too. Et voilà, we have a model in which performance is predicted by HPO-scores![/justify]
[justify]The essence of organizational performance is that it’s a trait of the organization – the research unit. It’s the organization that makes a profit, grows, and holds a market share. Cross-sectional research within one and the same organization cannot be used to explain variation in organizational performance, for the simple reason that organizational performance does not vary within the design of the research. Perceptions of respondents on current performance do vary – which is psychometrically interesting but methodologically irrelevant. HPO-research suffers from a form of atomistic fallacy, as it uses responses from employees to draw conclusions about the organization. The sole use of the HPO-data is to compute an average score (or five average scores, one per factor), and then make a statement like: it’s higher or lower than expected, or higher or lower than for other organizations. De Waal uses a completely arbitrary 8.5 (on a 10-point scale) as a benchmark for being an HPO.[/justify]
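The design flaw can be stated in one line of arithmetic: within a single organization, the dependent variable is a constant, so there is literally nothing for the HPO-scores to explain. A sketch with made-up numbers:

```python
import numpy as np

# Hypothetical within-organization survey: HPO scores vary across
# respondents, but organizational performance (say, last year's profit
# margin) is a single number shared by every respondent.
hpo_scores = np.array([6.1, 7.4, 5.8, 8.0, 6.9])
org_performance = np.full_like(hpo_scores, 4.2)   # one value for everyone

# Zero variance in the dependent variable: nothing to explain,
# and any correlation with the HPO scores is undefined.
print(org_performance.var())  # 0.0
```

Only by replacing the organization’s actual performance with each respondent’s *perception* of performance does variation appear – and that substitution is precisely the atomistic fallacy the section describes.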
[justify]5. Low Reliability[/justify]
[justify]The traditional approach to developing scales uses multiple items that indicate (measure) a common concept. To assess the reliability of the scale, Cronbach’s α is used. The value of α goes up with the correlations between the items, and with the number of items. In De Waal’s model, for example, the factor Management Quality has 12 items that are highly intercorrelated. Item Response Theory (IRT) explicitly takes the difficulty of items into consideration, which enables the researcher to measure subjects along the full length of the construct (here, views on Management Quality). IRT usually reveals that scales based on the traditional approach, with highly correlated items, are reliable in the middle part but perform poorly at the extremes. This holds for De Waal’s scales. Traditional reliability measures look great, but the scales actually perform poorly at the part of the scale where our interest lies: the high performers.[/justify]
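The standardized form of Cronbach’s α (the Spearman-Brown prophecy formula) makes the first half of this point explicit: α depends only on the number of items and their average inter-item correlation, so twelve highly correlated items guarantee a high α regardless of how precisely the scale measures at its extremes. The correlation of 0.6 below is an illustrative assumption, not a figure from De Waal’s data.

```python
def alpha_from_correlation(k, r_bar):
    """Standardized Cronbach's alpha (Spearman-Brown): a function of only
    the number of items k and the average inter-item correlation r_bar."""
    return k * r_bar / (1 + (k - 1) * r_bar)

# Twelve highly intercorrelated items, as in the Management Quality factor
print(alpha_from_correlation(12, 0.6))   # ~0.947: looks excellent on paper
```

A value near 0.95 looks excellent, yet it says nothing about measurement precision at the top of the scale – the region an IRT information function would examine, and exactly where the “high performers” live.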
[justify]In sum, there is good reason to assume that De Waal’s framework does not cover all dimensions of organizational performance.[/justify]
[justify]From a short- to medium-term perspective, there is a plethora of similar studies that include other aspects, and it would be foolish to disregard these studies on the basis of a lack of scientific validation. Strikingly, in our own HPO research, interviewees – in a qualitative stage preceding the structured survey – mentioned many aspects that hardly fit in with De Waal’s model, implying poor face validity. Within De Waal’s model, the dimensionality is questionable but, as we have argued, irrelevant for two reasons. First, a single dimension based on any random pick of, say, five items would suffice to tap into respondents’ views on the HPO-status. Second, and more importantly, the variation in respondents’ views cannot be used to explain organizational performance as the dependent variable.[/justify]
[justify]In the long run, the ever-changing role of management – given trends like the speed of innovation and the accompanying reliance on professionals, data-driven processes, and the shift from large corporations to networks of smaller entities – justifies innovations (shall we say continuous improvement, long-term orientation and openness?) in HPO-frameworks.[/justify]
[justify]The HPO-framework by De Waal has been inspired by the quest for the Holy Grail of management. The Holy Grail is based on miraculous powers, and I consider it likely that the HPO-framework actually does work. If an organization believes in the model, its effectiveness is a self-fulfilling prophecy. It’s just that other organizations, making use of alternative models or not believing in any specific model, are as likely to improve as long as they believe in something – preferably in themselves. [/justify]
[justify]With gurus, my question always is: do they really believe in the stuff they are telling (and selling) us? If they do, as in the case of De Waal, it’s touching, though close to pathetic, that they are truly convinced of having stumbled on something worthwhile. Gurus of any kind are dangerously naïve, especially when they tread on academic paths and confuse anecdotal evidence and casuistry with academic rigor. Old guys with grey beards are not always carriers of wisdom. Don’t judge gurus by the clothes they think they are wearing.[/justify]

Robert - 16:04:47
