Unfortunately, while there’s no shortage of measurements promising to keep government agencies on track, developing meaningful performance metrics isn’t easy. For starters, many agencies aren’t sure what exactly they should be measuring. Others struggle to select metrics that communicate a project’s true value. Worse, failure to capture the right data can cost even the best-intentioned department time, money and manpower.
Yet ask CTOs like Claire Bailey, who’s also director of the Arkansas Department of Information Systems (DIS), and they’ll tell you that it’s worth all the hand-wringing to establish effective metrics.
Making the Right Metrics
The Arkansas DIS provides more than 28 categories of services, including telephony, data networking and technical consulting for a variety of public entities. Brokering and managing telco services for the state is a highly regulated process that requires the DIS to keep close tabs on what it bills for its services — and the money it collects in return. Upon close examination, however, the DIS discovered that it wasn’t recovering the appropriate costs on its long-distance communications services.

“The DIS was having a huge problem in that there were a lot of costs the agency couldn’t account for,” recalled John Talburt, professor of information science and engineering at the University of Arkansas, who worked with Bailey to establish clear metrics.
Enter effective metrics. First, the DIS examined what it paid for its long-distance services. Next, the agency looked at what it was billing to provide those services, and to which public agencies. By measuring these two variables, the DIS determined that it was billing for only a portion of the long-distance services it resells to the public sector.
“The metrics showed that there were actually minutes lost that were not being billed because our data didn’t have an appropriate customer identifier built into the system,” Bailey said. “When we analyzed the data, we found some phone calls that were being made that didn’t align to specific customers. Therefore, we didn’t know where to bill them. Metrics helped us identify a gap between what we were paying for the service and what we were billing for the service.”
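The underlying check is simple enough to sketch. The following minimal Python example illustrates the kind of reconciliation Bailey describes: join each call record to a customer identifier, flag the orphans and compare wholesale cost against billable revenue. The field names and per-minute rates here are invented for illustration and are not drawn from the DIS’ actual billing system.

```python
# A hypothetical sketch of the DIS-style reconciliation. Field names and
# rates are invented for illustration, not the agency's real system.

WHOLESALE_RATE = 0.03   # hypothetical cost per minute paid to the carrier
RESALE_RATE = 0.05      # hypothetical rate billed to customer agencies

# Call detail records; a missing customer_id models the gap Bailey describes.
calls = [
    {"customer_id": "AGENCY-A", "minutes": 120},
    {"customer_id": "AGENCY-B", "minutes": 45},
    {"customer_id": None, "minutes": 60},   # unidentifiable -> can't be billed
]

billable = [c for c in calls if c["customer_id"] is not None]
orphaned = [c for c in calls if c["customer_id"] is None]

total_minutes = sum(c["minutes"] for c in calls)
billed_minutes = sum(c["minutes"] for c in billable)

cost = total_minutes * WHOLESALE_RATE     # what the agency pays for all minutes
revenue = billed_minutes * RESALE_RATE    # what it can actually invoice

print(f"Unbilled minutes: {total_minutes - billed_minutes}")
print(f"Orphaned records: {len(orphaned)}")
print(f"Cost paid for unbillable minutes: "
      f"${(total_minutes - billed_minutes) * WHOLESALE_RATE:.2f}")
print(f"Total cost ${cost:.2f} vs. billable revenue ${revenue:.2f}")
```

Even at this toy scale, the metric surfaces the problem: minutes without a customer identifier are paid for at wholesale but never invoiced, which is precisely the under-recovery the DIS found.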
It’s a discovery that wouldn’t have been possible without the DIS’ use of effective data-quality metrics. “The DIS’ [under-recovered costs] were measurable because they could easily look at how much money they were spending on wholesale communication and what they were recovering based on the itemized bills that were given by the agency,” Talburt said. “That was a very measurable product.”
Crunching Numbers
But effective metrics are more than just a high-tech sleuthing tool. By pinpointing exactly where the agency was leaking funds, the DIS avoided passing down additional fees to its customers.

“We would have had to raise the rates, which would have a fiscal impact on our customers if we didn’t correct the issue,” Bailey said. “By identifying the root cause and correcting the issue, we were able to save the state money and not inflate the long-distance rate. Now, the under-recovery of long-distance costs is no longer in existence. In fact, we were able to lower our rate.”
That’s not to suggest, however, that formulating effective metrics is a cut-and-dried endeavor. Take the recent trials and tribulations of the Miami-Dade County Department of Solid Waste Management. For years, one measure of the department’s performance was the tonnage of illegally dumped waste it removed from roadsides and other areas in the county. To reduce these “unsightly piles of waste,” the agency began collecting garbage more often, said Chris Rose, the department’s deputy director. The aggressive clean-up strategy reduced the total amount of illegal dumping: people tend to add to existing trash piles, so fewer standing heaps meant less new garbage piled on top.
Unexpected Challenges
But the Solid Waste Management department suddenly faced a conundrum. Its success in reducing the amount of illegal dumping earned it a red flag from its performance tracking system.

“The tonnage collected had been going down over time and therefore was showing up as ‘red’ because we weren’t collecting as much material as before,” said Rose. “Once we delved into the problem, we began calling it a ‘good’ red because it meant we were getting out and catching the piles faster.”
But that’s not all. “We had to start viewing the metrics in the context of total tonnage, the number of piles present and the speed at which we were responding to those piles,” Rose said. “All of that context made us realize that a lower tonnage of illegally dumped material was not a bad thing.”
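In code, that contextual reading might look like the hypothetical sketch below: instead of flagging on tonnage alone, the evaluation also weighs how many piles remain open and how quickly crews respond. The thresholds and function name are invented for illustration; the county’s actual tracking system will differ.

```python
# A hedged sketch of evaluating the tonnage metric in context. Thresholds
# and names are illustrative, not Miami-Dade's real system.

def classify_tonnage(tons_collected, open_piles, avg_response_days,
                     tons_baseline=100.0, pile_target=20, response_target=3.0):
    """Label a drop in tonnage 'good red' when pile counts and response
    times show the decline reflects faster cleanup, not missed work."""
    if tons_collected >= tons_baseline:
        return "green"
    if open_piles <= pile_target and avg_response_days <= response_target:
        return "good red"   # less tonnage because piles are caught early
    return "red"            # less tonnage with slow response: a real problem

print(classify_tonnage(tons_collected=70, open_piles=12, avg_response_days=2.0))
# -> "good red": tonnage fell, but piles are fewer and cleared quickly
```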
That left the department in a bind. On the one hand, the garbage collection metrics made it falsely appear as if the agency was sleeping on the job. On the other, tweaking the metrics to reflect its new collection strategy could look suspicious.
“We don’t want to be viewed as rigging the totals,” Rose said. Or worse, external factors such as spring break could result in an uptick in illegally dumped material, requiring the agency to modify its metrics yet again.
“As soon as you change metrics to fit your current situation, external circumstances can change on you,” warned Deborah Silver, Miami-Dade County’s director of information and technology services. “You shouldn’t be flipping metrics on a monthly basis. There has to be a balance.”
So far, the department has opted to leave the metrics as is, but that could change soon, Rose said.
“As long as the information is internal and doesn’t go too far outside of the department, we know what it means,” Rose said. “But if it gets published, we’re going to have to change it, because someone who doesn’t do this day-to-day won’t catch the context.”
Developing a Strategy
Fortunately, there are steps agencies can take to ensure their metrics accurately represent a department’s successes and failures. For starters, the Miami-Dade County Solid Waste department’s approach to crafting effective measures involves biannual reviews of its measurements — and accepting that there’s no such thing as an ideal metric.

“Creating effective metrics is more of an art than a science,” Silver said. “You have to play with them so you see the relationships [between data sets]. You may find some of your metrics are wrong, or some are close to being right or that they need adjusting. But if you wait for perfection, you won’t get there either.”
In addition to revisiting its chosen metrics annually, the Arkansas DIS relies on a strategic planning process to ensure its measurements are up to date and involve the right data.
The Arkansas DIS has the right idea, according to Paul Arveson, senior associate and founder of the Balanced Scorecard Institute. “A good strategy mapping practice is really key to establishing the right metrics,” Arveson said. “The strategy map shows how you connect the dots.”
That’s because a strategy map details a government agency’s vision of what its metrics are meant to achieve, the relevant policies and procedures, and the perspectives and objectives with which effective metrics should align.
“We like to build the strategy map with a cross-functional team that cuts across the organization so that everybody has to figure out how to align [metrics] to the vision of the organization,” he said.
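As a rough illustration, that alignment can be modeled as a simple data structure tying each perspective to objectives and candidate metrics. The sketch below uses the Balanced Scorecard’s four standard perspectives; the objectives and metrics shown are hypothetical examples, not Arveson’s.

```python
# A toy model of a strategy map: each Balanced Scorecard perspective carries
# objectives, and each objective names the metrics meant to track it.
# Objectives and metrics here are hypothetical examples.

strategy_map = {
    "financial": {
        "recover wholesale costs": ["unbilled minutes", "cost-recovery ratio"],
    },
    "customer": {
        "keep rates stable": ["long-distance rate per minute"],
    },
    "internal process": {
        "bill every call": ["% of calls with a customer identifier"],
    },
    "learning & growth": {
        "improve data quality skills": ["staff trained in data governance"],
    },
}

# Walking the map "connects the dots": every metric traces back to an
# objective and a perspective, so none floats free of the agency's vision.
for perspective, objectives in strategy_map.items():
    for objective, metrics in objectives.items():
        for metric in metrics:
            print(f"{metric} -> {objective} ({perspective})")
```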
By developing strategic plans that evaluate success, Arveson said, agencies become more accountable for the results of their actions. Better yet, by accurately demonstrating the value of particular projects and their impact on the overall department — whether it’s providing telco services or collecting garbage — agencies stand a better chance of justifying their IT spending.
Working Together
Another way agencies are creating effective metrics is through teamwork. “For five years now, we’ve had a monthly business review meeting where we bring in all senior staff and we put key metrics up on the wall so we can see and comment on them,” Rose said. “It makes it relatively easy to put a lot of data in one place, post our objectives and the measures that support them. Every month, we look at the information and talk about what it means to us.”

Talburt also supports a collaborative approach to creating effective metrics and selecting the proper data sets for evaluation. “The strategy now to improve information quality overall is to have governance,” he said. Such governance includes the use of a RACI (Responsible, Accountable, Consulted and Informed) matrix, also known as a Responsibility Assignment Matrix, which defines the roles of the various parties involved in completing a task on a particular project.
“It’s the idea that all stakeholders in the organization who have anything to do with an agency’s information, which is virtually everybody, have a seat at a table at a governance council,” Talburt said. “What happens is the people in the organization are responsible or accountable throughout a project.”
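A RACI matrix is straightforward to represent. The hypothetical sketch below records, for each governance task, which parties are Responsible, Accountable, Consulted and Informed, and checks the common convention that each task has exactly one accountable owner. The task names and roles are invented for illustration.

```python
# A minimal sketch of a RACI assignment matrix of the kind Talburt describes.
# Tasks and roles are hypothetical examples.

raci = {
    "define customer identifiers": {
        "Responsible": ["billing analyst"],
        "Accountable": ["data steward"],
        "Consulted":   ["network engineering"],
        "Informed":    ["customer agencies"],
    },
    "review unbilled minutes report": {
        "Responsible": ["data steward"],
        "Accountable": ["CTO"],
        "Consulted":   ["finance"],
        "Informed":    ["governance council"],
    },
}

# Each task should name exactly one accountable party, so ownership of the
# data, and of the metric built on it, is never ambiguous.
for task, roles in raci.items():
    assert len(roles["Accountable"]) == 1, f"{task} needs a single owner"
    print(f"{task}: accountable -> {roles['Accountable'][0]}")
```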
Making sure a number of professionals from a variety of departments have a vested interest in applying up-to-date and accurate metrics can ease the metrics design and review process. In the end, though, government agencies must remember that establishing metrics isn’t about gauging productivity levels or making stockholders happy. After all, said Rose, “We’re not in the business to make a profit, but to serve the public.”
Arveson agreed. “It’s not about the money,” he said. “It’s about mission success, mission performance.” And by creating metrics with the right amount of flexibility, strategy mapping, collaboration and governance, a government agency can chart a clear path to mission effectiveness.
Cindy Waxer is a journalist whose articles have appeared in publications including The Economist, Fortune Small Business, CNNMoney.com, CIO and Computerworld. cwaxer@sympatico.ca