Did PMI screw OPM3 up?
One can hardly blame the newly hired PMI employee whose job it was to administer the OPM3 program in its final year of development for making well-meaning decisions that nearly destroyed OPM3 later. He was doing the best he could. But he had no expertise in any PMI standards and held no PMI certifications. He had no experience in maturity models or remotely similar assessment protocols. He joined the OPM3 Program in its final stage of development. And he had no experience coordinating volunteers like those on the OPM3 program whose subject matter expertise far exceeded his own. These facts may have conspired against him.
Snatching Failure from The Jaws of Victory
This PMI manager immediately discounted the previous four years of research that had created the OPM3 Capability Statements and did an end-run around the program's approved governance structure. Without informing the OPM3 program's leadership, he engaged someone who had no prior experience in developing the Capability Statements and no understanding of OPM3's logical data model, and instructed that person to create a list of "high level" assessment questions with zero input from the hundreds of professionals who had developed OPM3 to date.
He presented this set of "high level" assessment questions as a fait accompli. The result was catastrophic: the questions had nothing to do with the OPM3 Capability Statements apart from the use of similar words. He nevertheless prevailed in forcing them into the book, prompting the longstanding manager of the OPM3 program to resign. Although OPM3 cannot be implemented with these "high level" questions, and no organization can increase its maturity level by using them, PMI markets them as the "Best Practice Self-Assessment."
These "high level" assessment questions were the source of the schism that fragmented OPM3, separating the Capability Statements from PMI's OPM3 book, and misleading countless OPM3 users for more than a decade. One might say this was a case of good intentions devoid of expertise, but perhaps this was a bureaucrat who should have left this work to the experts. As a result of forcing the "high level" assessment questions into the OPM3 book, users were encouraged to use those "high level" questions, which were completely ineffectual in helping users to identify appropriate improvements to increase their level of maturity in OPM3. Likewise, having put the "high level" questions in the OPM3 book, PMI compounded the error by removing the OPM3 Capability Statements from the OPM3 book, making those Capability Statements the basis of an expensive certification scheme that PMI marketed for nearly a decade, finally terminating the certification program late in 2015, begging the question "Will PMI re-insert the Capability Statements into the OPM3 book, and will they simultaneously eradicate the high level questions (also known as the 'Best Practice Self-Assessment') once and for all?"
Solving A Problem That Didn't Exist
He agreed that consumers needed an assessment protocol but clearly did not understand that the OPM3 Capability Statements were designed for this very purpose. He also failed to understand that OPM3 is modular by domain and that OPM3 assessments occur in stages (beginning with Standardization). These facts obviate the need to summarize the entirety of OPM3's Capability Statements into an abbreviated list of high level assessment questions, because in practice one is never confronted with the need to assess all of the Capability Statements at once.
Creating New Problems From Unnecessary Solutions
Once the Capability Statements were dumbed down into a short list of generic questions, it was no longer possible to identify which Capability Statements an organization had satisfied. In a typical example, a single "high level" question would ask whether three or four processes were standardized, allowing only a single "yes/no" response to the question as a whole.
Making Round Holes For Square Pegs
The first problem with this format is that a single "yes/no" answer to a question spanning multiple processes gives the respondent no way to indicate which processes merit a "yes" and which merit a "no." If a question asks whether you have implemented six processes, how do you respond when you have implemented three of them? A simple "yes" or "no" cannot answer the question; you need the option to answer separately for each process.
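The information loss described above can be sketched in a few lines of Python. The process names, the question bundling, and the organization's state here are hypothetical illustrations, not taken from OPM3 itself:

```python
# A hypothetical "high level" question that bundles several processes
# into one yes/no answer.
bundled_question_processes = [
    "Develop Project Charter",
    "Develop Project Management Plan",
    "Direct and Manage Project Work",
]

# The actual per-process state of a hypothetical organization.
implemented = {
    "Develop Project Charter": True,
    "Develop Project Management Plan": False,
    "Direct and Manage Project Work": True,
}

# A single yes/no forces one answer for the whole bundle:
# answering "yes" overstates maturity, "no" understates it.
answer_all = all(implemented[p] for p in bundled_question_processes)  # False
answer_any = any(implemented[p] for p in bundled_question_processes)  # True

# Per-process answers (as the Capability Statements allow) preserve the
# detail needed to see which specific improvements remain.
gaps = [p for p in bundled_question_processes if not implemented[p]]
print(answer_all, answer_any, gaps)
```

Either forced interpretation of the single yes/no loses the one fact the organization needs: which process is the gap.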
Encrypting Common Words Without Saying So
The second problem is that the questions were filled with jargon left undefined by the very act of abbreviation. For example, users did not know what "standardized" meant, because its meaning was spelled out by the Capability Statements but not by the "high level" assessment questions. Standardization requires a) process governance, b) documented processes, c) training of personnel, and d) consistent implementation of work methods. Users would see a "high level" question that asked about none of these elements and respond, "Well, we have a template for the output of this process, so I guess it's standardized," even when there was no consistency whatsoever in how their organization performed that process.
Or, instead of asking whether something was "standardized," many high level questions would ask whether something was "measured," without specifying, as the Capability Statements do, that the question actually concerns many distinct aspects of measurement: whether the critical characteristics of a process were identified, whether metrics had been defined for those characteristics, whether users understood the system of process inputs and outputs, and so on. The same was true of jargon like "controlled" and "continuously improved."
In this manner, the improvements that a user's organization needed to make were hidden, and users tended to rate their organizations higher than they should have (a common cognitive bias well documented by Nobel laureate Daniel Kahneman), which led to upsets later when they discovered the reality of their predicament. Users of these "high level" assessment questions cannot be blamed for answering them incorrectly nearly 100% of the time (often never learning they had done so).
Contriving Nonsense With No Basis In The Research
The situation was worse still for the large number of questions pertaining to so-called "Organizational Enablers": roughly two hundred Capability Statements in OPM3 that correspond to cultural and environmental factors spanning an extremely wide variety of topics. Here the problem went far beyond undefined jargon, because every Organizational Enabler is unique. Unlike, say, the Capability Statements for standardization of project management processes, measurement of portfolio management processes, or control of program management processes, which are articulated in nearly identical language "of a piece," the Organizational Enablers share no common template.
Because each Organizational Enabler is unique, one could not simply decode frequently occurring jargon to unravel the "high level" questions intended to represent them. Full sentences are needed to express each Organizational Enabler and to distinguish one from another, and those distinctions were completely lost when multiple Organizational Enablers were summarized by a single "high level" assessment question. In many cases, as many as a dozen Organizational Enablers were combined and reduced to a single question.
This means the "high level" questions contrived to represent OPM3's Organizational Enablers could not be explained even if one were to define jargon, and the result was "high level" questions that were not only cryptic but altogether unrepresentative of OPM3's underlying Capability Statements. There was simply no correlation to the original model (and therefore absolutely no way to identify the appropriate improvements any organization should undertake in order to increase its maturity level and develop capabilities in project, program, and portfolio management).
For these reasons, OPM Experts LLC never uses the "high level" assessment questions. OPM Experts LLC has extensive empirical data proving without a doubt that use of the high level questions in PMI’s OPM3 book produces erroneous results that are always controverted by a corresponding assessment using the Capability Statements. One wonders why any "OPM3 Consultants" would have used these "high level" assessment questions if they knew what they were doing.
Masking The Nonsense With Meaningless Numbers
These kinds of issues still plague the "high level" assessment questions today. Exacerbating the problem, users are expected to tally the number of questions they answer "yes" to and divide by the total number of questions, producing an overall percentage treated as one's "maturity rating," which is completely useless. Users are handed a numerical score but have no idea which specific improvements must be made to address the OPM3 Capability Statements, increase their level of maturity, and produce actual capabilities in project, program, and portfolio management.
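The tally-and-divide scheme described above can be sketched as follows. The question labels and answers are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical yes/no answers to a set of "high level" questions.
answers = {
    "Processes standardized?": True,
    "Processes measured?": False,
    "Processes controlled?": True,
    "Processes continuously improved?": False,
    "Organizational Enablers in place?": True,
}

# The prescribed tally: count of "yes" divided by the question count.
yes_count = sum(answers.values())
maturity_rating = 100 * yes_count / len(answers)  # a single percentage

# The percentage discards which capabilities are missing: two organizations
# with entirely different gaps can receive the identical score, and neither
# learns which specific improvements to undertake.
print(f"Maturity rating: {maturity_rating}%")
```

Any permutation of the same three "yes" answers yields the same 60% score, which is precisely why the number cannot point to the improvements an organization actually needs.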
The Most Remarkable Thing
When one begins to understand the depth of this unintended deception, one naturally becomes angry and sees that the arc of OPM3's adoption since its publication has not been a function of its merits but of an unnecessarily contrived pathology.
Despite the institutionalized red herrings embodied by the "high level" assessment questions, OPM3 has flourished when experts have intervened. But it should have flourished far beyond the intervention of such experts. And it still can if PMI will eradicate the "high level" questions and redirect users to the content they should have been told to use in the first place: the Capability Statements. It is literally an edit that could be made to the OPM3 book in less than 10 minutes.
However, PMI has been withholding the Capability Statements from customers since 2015, claiming to be evaluating OPM3's value in the marketplace, while simultaneously announcing that it had purchased an individual OPM3 Consultant's maturity assessment consulting company. PMI announced that it will use that subsidiary going forward to offer maturity assessments for hire, staffed by its own employees and based on a proprietary model that is not OPM3. Many people aware of this pivot have questioned these monopolizing decisions, specifically the appropriateness of commercializing a separate offering in competition with OPM3 while withholding from users intellectual property created by volunteers, who created that IP with the understanding that it would remain available to users. The community of OPM3 users has been waiting patiently for PMI to clarify the situation before taking further action.
The Bottom Line
In short, there simply was no need to "dumb it down," and doing so has caused immeasurable harm. Using the "OPM3 Best Practice Self-Assessment" questions is a worst practice. These "high level" questions are why "OPM3 Online" failed. Unfortunately, the same questions remain in PMI's OPM3 book today, compounding the error of excluding the Capability Statements from that book (which PMI did in order to make the Capability Statements the basis of an expensive certification scheme it only recently terminated). If PMI leaves the "high level" questions in the OPM3 book and does not reinsert the Capability Statements, OPM3 will eventually meet its demise.
A Call to Action
For the larger context of this problem, click here.