
Report on Government Services 2022

PART A, SECTION 1: RELEASED ON 25 JANUARY 2022

1 Approach to performance measurement

The Report on Government Services (the Report) provides information on the equity, efficiency and effectiveness of government services in Australia, which contributes to the wellbeing of all Australians by encouraging improvements in these services. The Report is used by governments to inform planning and evaluation of policies, for budgeting (including to assess the resource needs and performance of government agencies) and to demonstrate government accountability.

This Report provides a dynamic online presentation underpinned by machine-readable data in CSV format, as well as data provided in Excel format. The online presentation is being updated to move interpretative material within each service section from a separate PDF document to HTML on the web pages, co-located with the relevant context or indicator results. This change has been made for Parts B, E and F for the 2022 Report, and will be completed for Parts C, D and G for the 2023 Report.

Reasons for measuring performance

Measuring the performance of government service delivery and public reporting creates incentives for better performance by:

  • helping to clarify government objectives and responsibilities
  • promoting analysis of the relationships between agencies and between programs, enabling governments to coordinate policy within and across agencies
  • making performance more transparent through informing the community
  • providing governments with indicators of policy and program performance over time
  • encouraging ongoing performance improvements in service delivery and effectiveness, by highlighting improvements and innovation.

A key focus of the Report is on measuring the comparative performance of government services across jurisdictions. Reporting on comparative performance can provide incentives for service providers to improve performance where there is no or little competition, and provides a level of accountability to consumers, who have little opportunity to express their preferences by accessing services elsewhere.

The terms ‘comparative performance reporting’ and ‘benchmarking’ are sometimes used interchangeably. However, benchmarking can have a particular connotation of measuring performance against a predetermined standard. The Report can be considered as a form of results or process benchmarking, but the Report does not generally establish best practice benchmarks. However, governments can use the information in the Report to identify appropriate benchmarks.

The Report’s scope

Government provides a range of services to individuals, households and the community. The Report focuses on ‘social services’, which aim to enhance the wellbeing of people and communities by improving largely intangible outcomes (such as health, education and community safety). The Report contains performance information on child care, education and training, health, justice, emergency management, community services, social housing, and homelessness across 17 service areas. The service areas included in the Report were chosen based on a set of formal criteria.

Government recurrent expenditure on these services is approximately $301 billion (figure 1.1), a significant proportion (around 72 per cent) of total government recurrent expenditure. This expenditure is equivalent to about 15 per cent of gross domestic product (estimates based on data from ABS 2021).

Figure 1.1 — Governments’ recurrent expenditure by sector(a)

(a) Changes in sector expenditure over time can be partly due to the reallocation of services between sectors in line with broad policy shifts (or changes in the data source). Readers are encouraged to check service areas within each sector to confirm coverage for the relevant year.

Governments use a mix of methods to deliver these services to the community, including providing services directly (a ‘delivery/provider’ role), funding external providers through grants or the purchase of services (a ‘purchaser’ role) and subsidising users (through vouchers or cash payments) to purchase services from external providers.

As non‑government organisations are often involved in the delivery of services, funding from government may not meet the full cost of delivering a service to the community. Since the purpose of the Report is to provide information to assist governments in making decisions about the effectiveness and efficiency of government purchase or supply of services, it is confined to the cost to government. Similarly, it does not provide detailed information on general government income support. For example, the Report covers aged care but not the aged pension and child care but not family payments (although descriptive information on income support is provided in some cases).

Performance across agencies and jurisdictions will be affected by a range of factors outside government influence, such as geography, available inputs and input prices. The Report does not attempt to adjust reported results for differences that can affect service delivery (though some indicators incorporate adjustments where aligned with other national indicators, for example, adjustments for case mix for hospital separations in section 12). The approach used is to explain that government‑provided services are often only one contributing factor and, where possible, point to data on other key contributing factors, including different geographic and demographic characteristics across jurisdictions. Section 2 contains detailed statistics on each State and Territory, which may assist in interpreting the performance indicators presented in the Report.

Conceptual approach

The Report uses a consistent conceptual approach for reporting performance across service areas. This allows for comparisons in performance across services, improvements in reporting in one service area to be drawn upon for reporting in other areas, and issues that arise across service areas to be addressed in a consistent way.

The performance indicator framework

Each service area in the Report has a performance indicator framework and a set of objectives against which performance indicators report (figure 1.2). Performance indicators include output indicators, grouped under equity, effectiveness and efficiency, and outcome indicators.

Figure 1.2 — General performance indicator framework

The framework reflects the process through which inputs are transformed into outputs and outcomes in order to achieve desired objectives (figure 1.3). Service providers transform resources (inputs) into services (outputs). The rate at which resources are used to make this transformation is known as ‘technical efficiency’.

Figure 1.3 — Example of a service process — school education

The impacts of these outputs on individuals, groups and the community are the outcomes of the service. In the Report, the rate at which inputs are used to generate outcomes is referred to as ‘cost effectiveness’. Although no explicit cost‑effectiveness indicators are currently included in the Report, implicit cost‑effectiveness reporting is achieved through combinations of efficiency and effectiveness indicators, and combinations of efficiency and outcome indicators.
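
As a simple illustration of the distinction, the sketch below computes a technical efficiency measure (cost per unit of output) and an implicit cost-effectiveness measure (cost per unit of outcome) from hypothetical figures; the numbers and the school-education framing are illustrative assumptions, not data from the Report.

    # Illustrative sketch only: all figures are hypothetical, not Report data.
    recurrent_cost = 50_000_000         # inputs: annual cost to government ($)
    students_taught = 10_000            # outputs: students receiving the service
    students_meeting_standard = 8_500   # outcomes: students achieving the desired result

    technical_efficiency = recurrent_cost / students_taught           # cost per unit of output
    cost_effectiveness = recurrent_cost / students_meeting_standard   # cost per unit of outcome

    print(f"Cost per student taught: ${technical_efficiency:,.0f}")
    print(f"Cost per student meeting the standard: ${cost_effectiveness:,.0f}")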

Objectives

Each service area has a set of objectives against which performance is reported. The structure of objectives is consistent across service areas and includes three components:

  • The high-level objectives or vision for the service, which describes the desired impact of the service area on individuals and the wider community.
  • The service delivery objectives, which highlight the characteristics of services that will enable them to be effective.
  • The objectives for services to be provided in an equitable and efficient manner.

Indicators that are linked to the high-level vision are outcome indicators, whereas indicators that report on the effectiveness of service delivery, or how equitable or efficient the service delivery is, are output indicators. These are discussed in more detail below.

The objectives in this Report are similar across jurisdictions. However, the priority given to each objective can vary. For example, one jurisdiction might prioritise improving accessibility and another might prioritise improving quality. The Report focuses on the extent to which each shared objective for a service has been met.

Output indicators

While the Report aims to focus on outcomes, these are often difficult to measure. The Report therefore includes measures of outputs, where there is a relationship between those outputs and desired outcomes. Output information is also critical for equitable, efficient and effective management of government services, and is often the level of performance information that is of most interest to individuals who access services.

Equity, effectiveness and efficiency indicators are given equal prominence in the Report’s performance reporting framework, as they are the three overarching dimensions of service delivery performance. It is important that all three are reported on as there are inherent trade-offs in allocating resources and dangers in analysing only some aspects of a service. For example, a service provided may have a high cost but be more effective than a lower‑cost service, and therefore be more cost effective. In addition, improving outcomes for a group with special needs may lead to an increase in the average cost per unit of providing a service.

Equity indicators

Equity indicators measure how well a service is meeting the needs of particular groups that have special needs or difficulties in accessing government services. The equity–access indicators focus on measuring whether services are equally accessible to everyone in the community, regardless of personal characteristics such as cultural background or location. Effectiveness indicators can also have an equity dimension when the focus is on any gap in performance between special needs groups and the comparison/general population (for example, readmissions to hospital within 28 days of discharge, by Indigenous status). Equity of outcomes is also reported on under outcome indicators in some sections.

Criteria are used to classify groups that may have special needs or difficulties in accessing government services. Some service areas have specific target groups identified; the groups most often identified across the Report are:

  • Aboriginal and Torres Strait Islander people
  • People living in rural or remote areas
  • People from a non-English speaking background
  • People with disability (whose access to specialist disability services is also reported in section 15).

To measure equity of access, the Report often compares the proportion of the community in the special needs group with their proportion in the service user population. This approach is suitable for services that are provided on a virtually universal basis (for example, preschool education), but must be treated with caution for other services where service provision is based on the level of need. Ideally for these latter services, comparisons should be made across special needs groups on the basis of need (for example, disability services uses potential populations for each special needs group).
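
A minimal sketch of this comparison follows, using hypothetical figures: for a near-universal service, a special needs group's share of service users can be compared with its share of the community, and a ratio close to one suggests proportionate access. The figures and the interpretation threshold are illustrative only.

    # Hypothetical figures for illustration; actual equity measures vary by service area.
    group_population = 120_000      # people in the special needs group in the community
    total_population = 2_000_000    # total community population
    group_service_users = 4_500     # service users from the special needs group
    total_service_users = 90_000    # all service users

    share_of_community = group_population / total_population
    share_of_users = group_service_users / total_service_users
    representation_ratio = share_of_users / share_of_community   # ~1.0 suggests proportionate access

    print(f"Share of community: {share_of_community:.1%}")
    print(f"Share of service users: {share_of_users:.1%}")
    print(f"Representation ratio: {representation_ratio:.2f}")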

Effectiveness indicators

Effectiveness indicators measure how well the outputs of a service meet its delivery objectives. The reporting framework groups effectiveness indicators according to characteristics that are considered important to the service. For most sections, these characteristics include access, appropriateness and quality.

Access

Access indicators measure how easily the community can obtain a service. Access indicators can generally be categorised under three domains:

  • Overall access indicators show how readily services are accessed by those who need them across the eligible or relevant population (for example, access to specialist disability services is measured according to the ‘potential population’ based on disability rates). Due to difficulties in directly measuring access, indirect measures are often included, such as measures of unmet need (section 15) or enrolment in preschool (section 3).
  • Timeliness of access indicators are important for services where there is limited supply of services, sometimes resulting in consumers experiencing delays accessing those services. For example, waiting times for health services, such as public dentistry and public hospitals (sections 10 and 12).
  • Affordability indicators are included for service areas where consumers face at least part of the cost of the service and cost can be a barrier to obtaining the service. For example, the proportion of income spent on particular services, such as parents’ out-of-pocket cost of child care (section 3), or the proportion of people who delayed getting or did not get a prescription filled at any time in the previous 12 months due to cost (section 10).

Appropriateness

Appropriateness indicators measure how well services meet clients’ needs. Appropriateness is distinct from access, in that it is measuring performance in meeting the needs of people who already have access to the service. For example, whether students achieve their main reason for training (section 5).

Appropriateness indicators also seek to identify whether the level of service received is appropriate for the level of need (HWA 2012; Birrell 2013). Some services have developed measurable standards of service need, against which levels of service can be assessed (for example, the ‘overcrowding’ measure in housing (section 18) measures the appropriateness of the size of the dwelling relative to the size and composition of the household). Other services have few measurable standards of service need; for example, the desirable number of medical treatments for particular populations is not known.

Quality

Quality indicators measure whether a service is suited to its purpose and conforms to specifications. Information about quality is particularly important when there is a strong emphasis on increasing efficiency. There is usually more than one way in which to deliver a service, and each alternative has different implications for both cost and quality. Information about quality is needed to ensure all relevant aspects of performance are considered.

The approach in the Report is to identify and report on all aspects of quality including both actual and implied competence:

  • Actual competence can be measured by the frequency of positive (or negative) events resulting from the actions of the service.
  • Implied competence can be measured by proxy indicators, such as the extent to which aspects of a service conform to specifications.

Quality indicators in the Report generally relate to one of four categories:

  • Standards — whether services are accredited and/or meeting required standards, such as legislation. For example, compliance with service standards for aged care services (section 14).
  • Safety — whether services provided are safe. For example, road safety and deaths in police custody (section 6).
  • Responsiveness — whether services are client orientated and respond to clients’ stated needs. For example, measures of patient satisfaction (sections 10 and 12).
  • Continuity — whether services provide coordinated or uninterrupted care over time and across service providers. For example, community follow-up after psychiatric admission (section 13).

Efficiency

Economic efficiency requires satisfaction of technical, allocative and dynamic efficiency:

  • Technical efficiency requires that goods and services be produced at the lowest possible cost.
  • Allocative efficiency requires the production of the set of goods and services that consumers value most, from a given set of resources.
  • Dynamic efficiency means that, over time, consumers are offered new and better products, and existing products at lower cost.

The Report focuses on technical (or productive) efficiency. Technical efficiency indicators measure how well services use their resources (inputs) to produce outputs for the purpose of achieving desired outcomes. Government funding per unit of output delivered is a typical indicator of technical efficiency — for example, cost per annual hour for vocational education and training (section 5).

Some efficiency indicators included in the Report are incomplete or proxy measures for technical efficiency. For example, as only the cost to government is reported on, some efficiency measures do not include the full cost of providing services and are, therefore, incomplete measures of technical efficiency. Other indicators of efficiency, such as partial productivity measures, are also reported on where there are shortcomings in the data. For example, judicial officers per finalisation (section 7).

In addition, some service areas report on the cost per head of total/eligible population, rather than the cost per person actually receiving the service or another unit of output. These are not measures of technical efficiency; rather, they measure the cost of providing the service relative to the total/eligible population.
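
The sketch below contrasts these measures using hypothetical figures: cost per unit of output (a technical efficiency measure), cost per person receiving the service, and cost per head of the eligible population (not a technical efficiency measure). All names and numbers are illustrative assumptions.

    # Hypothetical figures for illustration only.
    cost_to_government = 30_000_000   # recurrent cost to government ($)
    output_units = 400_000            # e.g. annual hours of training delivered
    service_recipients = 25_000       # people who actually received the service
    eligible_population = 150_000     # total/eligible population

    cost_per_output = cost_to_government / output_units           # technical efficiency measure
    cost_per_recipient = cost_to_government / service_recipients  # unit cost per service user
    cost_per_head = cost_to_government / eligible_population      # not a technical efficiency measure

    print(f"Cost per unit of output: ${cost_per_output:.2f}")
    print(f"Cost per service recipient: ${cost_per_recipient:,.0f}")
    print(f"Cost per head of eligible population: ${cost_per_head:,.0f}")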

Outcome indicators

Outcome indicators provide information on the overall impact of a service on the status of individuals and the community, as opposed to output indicators, which report on the characteristics of service delivery. Outcomes may be short or longer term and the approach in the Report is to use both types of outcome indicators, as appropriate. In school education, for example, learning outcomes at years 3, 5, 7 and 9 may be considered intermediate outcomes, while completion of year 12 or school leaver destinations may be considered final outcomes (section 4).

In contrast to outputs, outcome indicators:

  • typically depend on a number of service characteristics
  • are more likely to be influenced by factors outside the control of governments or entities delivering services.

Guiding principles for the Report

Along with the conceptual approach, the guiding principles provide the basis for reporting performance across service areas (box 1.1). There are often trade-offs that need to be made across the principles; for example, between the accuracy of data and their timeliness. Sometimes data that are provided in a timely manner have had less time to undergo rigorous validation. The approach in the Report is to publish the imperfect data that are available, where they are fit for purpose, with the necessary caveats. This approach allows increased scrutiny of the data and reveals the gaps in critical information, providing the foundation for developing better data over time. Important information about data quality is included in the relevant sections and attachment tables. More information on data quality for some indicators and measures is available from external data providers, including the ABS and AIHW. Data Quality Statements for National Agreement indicators and datasets maintained by the AIHW are available from the AIHW.

Box 1.1 — Guiding principles for the Report

Comprehensiveness — performance should be assessed against all important objectives.

Streamlined reporting — a concise set of information about performance against the identified objectives of a sector or service should be included.

A focus on outcomes — high‑level performance indicators should focus on outcomes, reflecting whether service objectives have been met.

Hierarchical — high-level outcome indicators should be underpinned by lower‑level output indicators and additional disaggregated data where a greater level of detail is required.

Meaningful — reported data must measure what they claim to measure. Proxy indicators should be clearly identified and the development of more meaningful indicators to replace proxy indicators is encouraged where practicable.

Comparability — data should be comparable across jurisdictions and over time. However, comparability may be affected by progressive data availability. Where data are not yet comparable across jurisdictions, time series data within jurisdictions is particularly important.

Completeness and progressive data availability — aim to report data for all jurisdictions (where relevant), but where this is not possible report data for those jurisdictions that can report (not waiting until data are available for all).

Timeliness — data published are the most recent possible. Incremental reporting when data become available, and then updating all relevant data over recent years, is preferable to waiting until all data are available.

Use acceptable (albeit imperfect) performance indicators — relevant performance indicators that are already in use in other national reporting arrangements are used wherever appropriate.

Understandable — data must be reported in a way that is meaningful to a broad audience, many of whom will not have technical or statistical expertise.

Accurate — data published will be of sufficient accuracy to provide confidence in analysis based on information in the Report.

Validation — data can vary in the extent to which they have been reviewed or validated (at a minimum, all data are endorsed by the provider and subjected to peer review by the Working Group for the relevant service area).

Full costing of services — efficiency estimates should reflect the full costs to government (where possible).

Source: Adapted from Ministerial Council for Federal Financial Relations (MCFFR) (2009).

Costing of services

In addition to the Review objective that expenditure on services be measured and reported on a comparable basis, efficiency estimates should also reflect the full costs to government. Issues that have affected the comparability of costs in the Report include:

  • accounting for differences in the treatment of payroll tax (SCRCSSP 1999)
  • including the full range of capital costs (SCRCSSP 2001)
  • apportioning applicable departmental overhead costs
  • reporting non-government sourced revenue.

Payroll tax

The Steering Committee’s preference is to remove payroll tax from reported cost figures, where feasible, so that cost differences between jurisdictions are not caused by differences in payroll tax policies. However, in some sections it has not been possible to separately identify payroll tax, so a hypothetical amount is included in cost estimates for exempt services.
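
The sketch below illustrates the two directions of adjustment described above, using a notional payroll tax rate and hypothetical costs; it is not the Report's calculation method and all figures are assumptions.

    # Illustrative sketch only; the rate and all amounts are hypothetical.
    notional_payroll_tax_rate = 0.05   # assumed rate for illustration

    # Preferred approach: remove payroll tax actually paid from a taxed service's reported costs.
    taxed_service_cost = 10_500_000
    payroll_tax_paid = 400_000
    comparable_cost_taxed = taxed_service_cost - payroll_tax_paid

    # Fallback where payroll tax cannot be separately identified: include a hypothetical
    # amount in the cost estimates of exempt services so jurisdictions remain comparable.
    exempt_service_cost = 10_000_000
    exempt_salary_costs = 8_000_000
    comparable_cost_exempt = exempt_service_cost + exempt_salary_costs * notional_payroll_tax_rate

    print(comparable_cost_taxed, comparable_cost_exempt)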

Capital costs

Under accrual accounting, the focus is on the capital used (or consumed) in a particular year, rather than on the cash expenditure incurred in its purchase (for example, the purchase costs of a new building). Capital costs comprise two distinct elements:

  • Depreciation — defined as the annual consumption of non-current physical assets used in delivering government services.
  • User cost of capital — the opportunity cost of funds tied up in the capital used to deliver services (that is, the return that could have been generated if the funds were employed in their next best use).

Both depreciation and the user cost of capital should be included in unit cost calculations (with the user cost of capital for land to be reported separately). In addition, the user cost of capital rate should be applied to all non-current physical assets, less any capital charges and interest on borrowings already reported by the agency (to avoid double counting). The rate applied for the user cost of capital is based on a weighted average of rates nominated by jurisdictions (currently 8 per cent).
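
A minimal worked sketch of this calculation follows, using hypothetical asset balances; the 8 per cent rate is the rate noted above, and the treatment of the land component and already-reported charges follows the description in the text rather than any jurisdiction's actual accounts.

    # Hypothetical balances for illustration only.
    non_current_physical_assets = 50_000_000   # all non-current physical assets, including land ($)
    land = 10_000_000                          # land component; its user cost of capital is reported separately
    depreciation = 2_500_000                   # annual consumption of non-current physical assets
    already_reported_charges = 300_000         # capital charges and interest on borrowings already reported
    ucc_rate = 0.08                            # user cost of capital rate (currently 8 per cent)

    # Apply the rate to non-land assets, deducting charges already reported to avoid double counting.
    user_cost_of_capital = (non_current_physical_assets - land) * ucc_rate - already_reported_charges
    user_cost_of_capital_land = land * ucc_rate   # reported separately

    capital_cost = depreciation + user_cost_of_capital
    print(f"Capital cost for unit cost calculations: ${capital_cost:,.0f}")
    print(f"User cost of capital on land (reported separately): ${user_cost_of_capital_land:,.0f}")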

Differences in asset measurement techniques can have a major impact on reported capital costs (SCRCSSP 2001). However, the differences created by these asset measurement effects are generally relatively small in the context of total unit costs, because capital costs represent a relatively small proportion of total cost (except for housing). In housing, where the potential for asset measurement techniques to influence total unit costs is greater, the adoption under the Commonwealth/State Housing Agreement (replaced by the National Affordable Housing Agreement from 1 January 2009, and then the National Housing and Homelessness Agreement from 1 July 2018) of a uniform accounting framework has largely prevented this from occurring. The adoption of national uniform accounting standards across all service areas would be a desirable outcome for the Review.

Other costing issues

Other costing issues include the apportionment of costs shared across services (mainly overhead departmental costs) and the treatment of non-government sourced revenue.

  • Full apportionment of departmental overheads is consistent with the concept of full cost recovery. The practice of apportioning overhead costs varies across the services in the Report.
  • The treatment of non-government sourced revenue varies across services in the Report. Some services deduct such revenue from their efficiency estimates. Ideally, when reporting technical efficiency for services that governments provide directly, estimates should be reported both gross and net of these revenues. Some services report net of revenue only; this is usually the case where the amounts concerned are relatively small (for example, courts). The costs reported are therefore an estimate of the net cost to government. However, where revenue from non‑government sources is significant (such as for public hospitals, fire services and ambulance services), both the gross cost and the net cost to government are reported, in order to provide an adequate understanding of efficiency (a simple gross versus net calculation is sketched below).
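
The sketch below shows the difference between the two presentations using hypothetical figures; the service type and all amounts are illustrative assumptions.

    # Hypothetical figures for illustration only.
    total_cost_of_service = 80_000_000     # full recurrent cost of providing the service ($)
    non_government_revenue = 12_000_000    # e.g. user charges or patient fees
    output_units = 40_000                  # units of output delivered

    gross_unit_cost = total_cost_of_service / output_units
    net_unit_cost = (total_cost_of_service - non_government_revenue) / output_units   # net cost to government

    print(f"Gross cost per unit of output: ${gross_unit_cost:,.0f}")
    print(f"Net cost to government per unit of output: ${net_unit_cost:,.0f}")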

References

ABS (Australian Bureau of Statistics) 2021, Australian National Accounts: National Income, Expenditure and Product, Australian National Accounts, June 2021, https://www.abs.gov.au/statistics/economy/national-accounts/australian-national-accounts-national-income-expenditure-and-product/latest-release (accessed 12 November 2021).

Birrell, B. 2013, Too many GPs, Centre for Population and Urban Research Report, Monash University, Melbourne.

HWA (Health Workforce Australia) 2012, Health Workforce 2025 – Doctors, Nurses and Midwives – Volume 1, Adelaide.

MCFFR (Ministerial Council for Federal Financial Relations) 2009, Intergovernmental Agreement on Federal Financial Relations (Intergovernmental Agreement), www.federalfinancialrelations.gov.au/Default.aspx (accessed 3 January 2013).

SCRCSSP (Steering Committee for the Review of Commonwealth/State Service Provision) 1999, Payroll Tax in the Costing of Government Services, Productivity Commission, Canberra.

—— 2001, Asset Measurement in the Costing of Government Services, Productivity Commission, Canberra.
