Subho Banerjee, Research Program Director, ANZSOG
Successful collaboration between practitioners and academics can play a critical role in strengthening the evidence base for public policy decision-making. Yet it is rare to see such collaboration done in a manner that is genuinely satisfying for both sides, and that generates real impact in the public policy process.
Public policy practitioners are required to provide decision-making advice on complex, difficult policy problems across a wide range of subject domains, often with limited information and under significant time pressure. Academics have deep specialist expertise, honed over many years of rigorous professional practice and contest, and are often seeking opportunities to apply that expertise in real-world problem-solving. There should be significant mutual benefit in making the match happen across the boundary, but profound differences in incentives, language and culture mean that, all too often, collaborating across the boundary remains a fraught process.
These difficulties can be overcome, but it requires specific attention to the techniques required to work successfully across the boundary.
In this piece, the focus is on the starting point for working together – what is the question, exactly? This is particularly important for project work, where there will be a clear and definable project output, which will be expected to answer the question posed, to the best of available knowledge.
For project work, getting the question right is foundational to the success of the collaboration. After all, as Ursula Le Guin said, ‘there are no right answers to wrong questions’. But working out the right question is difficult, and requires the commitment of time and resources to work through the conceptual issues, and to ensure that everyone involved has the chance to come to a common understanding.
When expressing concern about collaboration, practitioners are prone to complain about a lack of practicality in academic project work. But they rarely reflect on how much of this might be driven by how they formulated the question in the first place. How well was the context spelt out? Was it made clear which constraints are really binding in the real world in this particular public policy problem? What is the relative prioritisation within the question set – what is essential, and what is optional? If this information is not set out in the framing of the question, it is hardly surprising that the academic then applies different constraints, or chooses a different prioritisation – which may reduce the practical relevance of the work.
How closely was the form of the output specified? Was it made clear what level of detail would be required for implementation? This is always likely to be contested space – a delicate judgement call between scope, budget, and duration of the work. But again, if there is no agreement up-front about the target, it is hardly surprising that the final product may not hit the mark.
And, of course, there are corresponding complaints from academics as well. Most commonly, the complaints are about poor technical specification in the question – language being used too loosely (therefore appearing ambiguous or unclear), or the scope being poorly defined (hence unrealistic about what can be delivered for the given timeframe). But occasionally the complaints are that the question is too prescriptive – either precluding a deeper exploration of the underlying drivers of an issue, or being written specifically to preconfigure a particular answer.
Loose question setting can thus explain much of the frustration felt on both sides of the practitioner/academic collaboration. But writing good questions for public policy making is a matter of considerable trade-craft – to get the right balance on issues of scope, detail, prioritisation and approach, and to do so in language that is comprehensible enough for generalists, but accurate enough for specialists.
And it is trade-craft with a serious epistemological component – you need to think carefully about the knowledge structures that apply in question setting.
My academic training was initially in experimental physics, rather than public policy. In the natural sciences, the external world is conceptualised as an objective fact – it exists in the same manner regardless of how we choose to think about it. But in fact, what we perceive of the external world always depends to some extent on how we look – as noted by Werner Heisenberg, ‘what we observe is not nature itself, but nature exposed to our method of questioning’.
It is thus important to recognise that the act of setting the question does indeed have consequences for how the problem is likely to be approached. The framing and language used in the question implicitly or explicitly presupposes a way of thinking about the problem – so great care must be taken to ensure that it leaves open enough space to think creatively about the answer.
In experimental physics, there are some questions that, once you have specified them correctly, are considered ‘trivial’ – either because the answer to the question can be derived unequivocally from theory, or because the proposed experiment is actually just a remapped version of an experiment that has been done in a different form. This might not have been obvious at the start – but once you have worked out the right way to ask the question, the answer falls out as a matter of course.
And then there are some questions which are more or less impossible to answer within current constraints of available resources or technical experimental parameters. These questions might well be very interesting to keep in mind for the future, and may in time be genuinely ground-breaking – but they are not realistic for current project work.
So the constructive space is in the middle, to find a tractable, manageable form of problem definition – something that can be done with available approaches and resources, but that holds the promise of something novel, and not wholly knowable at the start.
The degree of detail really matters to define this middle space. Another physics aphorism is that ‘everything should be made as simple as possible, but not simpler’ – that is, explanations need to draw out underlying drivers, but in a manner that doesn’t assume away the intricacy of real-world experience.
This poses an acute problem in public policy problem definition – how can practitioners represent the detail of experience in a manner which enriches the investigation, but does not make the analysis impossible? The common practitioner complaint about a lack of practicality often comes from a perception that there hasn’t been enough grappling with experience – but conversely, the common academic complaint about impractical scope often comes from being expected to produce a simple and tractable analysis which still deals with all different cases under all different circumstances.
In public policy, as in physics, the real richness of insight comes from a thinking process that allows an iteration between theory and empirical experience. Wherever possible, potential hypotheses should be tested against experience to determine their validity, and refined as new experience is brought to bear.
And this iteration needs to be played out in micro in the question setting itself – which requires time and resource commitment, in proportion to the overall scale of the project. It requires domain knowledge and expertise, combined with a high degree of openness and goodwill. These are messy, difficult issues, which can be hard to codify, and hence are often best dealt with through a degree of informal conversation. So it is unrealistic to think that they can be done in a single pass – they require working through in a structured, iterative process.
As an example, the Australia and New Zealand School of Government (ANZSOG) sought to use a more detailed, intensive question-setting process recently in producing a series of six papers on key public administration issues for the Independent Review of the Australian Public Service (APS) (also known as the Thodey review). The papers were written by a selection of senior academics and practitioners, through a process brokered by ANZSOG. They covered a wide range of topics, including public integrity, public governance, service delivery reform, interjurisdictional challenges, commissioning and contracting, and improving the use of evidence and evaluation.
The final ANZSOG APS Review papers were of the order of 20-30 pages each, and each responded to a specific public policy/administration question, expressed as a problem statement of approximately 2 pages. The overall collaboration ran over approximately 9 months in 2018-19, with individual component papers being commissioned and written in parallel at different stages in the process. Three of the six papers were done in two phases – an initial literature review and state of play, followed by a more detailed exploration of policy options (two phases of about 6 weeks each). The other three papers were done in a single phase, covering the same range of interest, but in a compressed time frame (about 8 weeks each).
The secretariat to the Independent Review, on behalf of the Review Panel, generated initial problem statements in each of these areas. But these problem statements were explicitly presented as a ‘first pass’ – to set out what they thought they wanted, but in due recognition that they were not technical experts in the domains to be covered. ANZSOG then facilitated a structured feedback process between the secretariat and the selected authors, through which the problem definition was refined. The question-setting process took a full week, even though the papers themselves had to be delivered within a tight 6-8 weeks of part-time writing time for each phase. And for the papers which went through two phases, the initial literature review stage itself was invaluable preparation for the development of much better targeted questions for the policy options phase.
In each case, the wording of the final question was refined to enhance technical accuracy, to ensure that the work would be well-targeted to the Panel’s priorities, and to adjust the scope to ensure that the papers could be delivered to a high standard within the stringent time deadline. For example, with regard to public integrity, the secretariat was able to request specifically that the authors start by setting out the broader conceptual and philosophical basis for developing a positive integrity culture in the APS, rather than narrowing too quickly to specific questions of institutional design. With regard to commissioning and contracting, the authors were able to clarify up-front that the most useful discussion would be about the underlying drivers for using third parties in delivering the business of government, rather than the surface manifestation of particular uses of consultants and contractors.
The feedback from this process has been very positive. The authors felt far more directly involved in the commissioning process than usual – indeed, on one memorable occasion, one of the authors started complaining about certain phrasing in the question later on, before remembering that he had put it in himself originally! And the secretariat commenced the process with a much higher degree of confidence that the eventual output would be well-targeted to the areas of particular priority for the Review Panel. ANZSOG was able to use the refined questions as the main reference point through the process itself – to work with the authors to ensure then that these questions were directly answered in the final product, in a practical, experience-informed formulation that would assist the Panel directly in their deliberations. The eventual papers themselves have generally been well-received in this regard – including being acknowledged positively in the final Review report and in broader public discussion.
ANZSOG is seeking to institute a similar process in future commissioning exercises. We are looking to specify a period for working through the question intensively at the start of commissioned work. For smaller pieces, this might be done informally through discussion or email, but for larger pieces, this might involve specific workshops, which may also draw in other external expertise. We need to work through the language, structure and prioritisation of the question in detail, to settle scope and priorities as explicitly as possible. The aim is to work out a question that it is possible to answer in an interesting manner, at a useful level of detail, within given resource and time constraints. The issues need to be worked through iteratively and carefully, and the process may often need to continue even once the project is underway – as understanding grows on both sides about how best to frame the problem.
Of course, more attention to question-setting doesn’t guarantee success in and of itself. The work itself still needs to be of a high quality, and actually answer the question being asked. But such a process provides a far more solid platform for successful collaboration, through paying due respect to the epistemological complexity inherent in the question setting challenge.
Of course, as well as project work, collaboration can also take the form of dialogue, ongoing engagement or less formal experience-sharing. Indeed, a critical part of the initial conversation about the question needs to be to determine the best form of the collaboration.
 Noting that sometimes academics can bring a valuable perspective in interrogating the question itself – as is discussed later in this piece, this needs to be an iterative process.
 This is a one-sentence statement of a core issue of the epistemology of science, and so is, of course, over-simplified.
From https://en.wikiquote.org/wiki/Werner_Heisenberg, cited as Physics and Philosophy: The Revolution in Modern Science (1958), lectures delivered at the University of St Andrews, Scotland, winter 1955-56.
Aphorism attributed to Einstein – considered to be a paraphrasing of his statement that ‘the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience’. From https://en.wikiquote.org/wiki/Albert_Einstein, cited as Philosophy of Science, Vol. 1, No. 2 (April 1934), pp. 163-169, at p. 165.
 This can take different forms. Some public policy interventions are amenable to small-scale experimentation and iteration, while others may involve large system change. But in all cases, the experience gained needs to be brought to bear in thinking about future designs.
 This may require some more flexibility on the government commissioning side as well – to refine the question in conjunction with the contracted party, rather than holding fast to the question set up-front as part of a tender process. As it happens, this issue is explored in further detail in ANZSOG’s APS Review paper on commissioning and contracting, as an example of the kind of reforms required in procurement processes to facilitate more successful knowledge-intensive relational contracting.
 ANZSOG papers available at https://www.anzsog.edu.au/resource-library/research/anzsog-aps-review
Strictly speaking, the problem statement was a series of related questions on the topic.
Of course, part of respecting the goodwill in the process is not to revisit who said what to whom during working discussions in too much detail – hence the examples in this piece have been kept at a high level only.
 Final Thodey APS Review report available at https://pmc.gov.au/resource-centre/government/independent-review-australian-public-service