
Costly Skepticism in the Auditing Industry


Skepticism is a foundational characteristic for financial statement auditors, and it needs reinforcement.

But Joseph Brazel, Jenkins Distinguished Professor of Accounting in the Poole College of Management, has found evidence that supervising auditors sometimes discourage their subordinates’ skepticism by fixating on the cost of investigations that don’t identify misstatements in the financial statements.

Brazel calls that “costly skepticism,” and his work on the subject reached an influential audience when George R. Botic, a board member of the Public Company Accounting Oversight Board (PCAOB), referenced it during a March 20 talk at Virginia Tech. Botic cited Brazel’s research in calling for the industry to “actively encourage and reward professional skepticism, regardless of the outcome.”

We caught up with Brazel in April to learn more about the effects of costly skepticism, how corporate boards and audit supervisors can create environments that encourage inquiry, and what sparked his interest in the topic.

What does “costly skepticism” mean?

When you think of skepticism, you’re thinking of a questioning mind and a critical evaluation of evidence. In my field, we’re talking about evidence that either supports or contradicts the financial statements of corporations. 

So the quandary that financial statement auditors have is that they're expected to be skeptical. But 97 times out of 100, they don't find gold at the end of the rainbow. If you dig further, you may go over budget as you spend additional time and accumulate additional evidence. So all of it's costly.

What we see in our research is that those costs are viewed as a normal cost of business when you find something, but viewed much more negatively if you don't. And we've seen that the supervisors of these auditors review their skepticism with hindsight bias, with knowledge of the outcome. So the managers say, "Well, why did you even look? You knew where you were gonna end up." But that's easy to say when you know the very end of the story. What's important is that if evidence suggests you should dig further, then you actually dig. You can't control the outcome.

In our studies, we see that people who dig and don't find something receive significantly lower performance evaluations than people who dig and find something. In those experimental studies, participants have the same evidence, the same fraud red flag present, the same amount of time spent, the same costs, except that in one situation there's a reasonable explanation for the inconsistency they observed, and in the other they find something.

What are the costs of costly skepticism?

With these audit engagements, time is money. And you're going to spend a lot more time investigating red flags and anomalies. Audit fees are typically fairly fixed, so more hours mean more cost, which means a less profitable engagement. It's spending more time to come to a conclusion we would have reached anyway.

The other thing is that you're delaying the work you're assigned to do next, because you're still digging on your current assignment. That work either waits or gets handed off to someone else on your audit engagement team, so they have to take on that load.

One additional cost we found when we ran a study with corporate management: managers convey information to the audit partner in charge of the engagement about the professionals auditing their financial statements. If an auditor digs a lot and doesn't find something, management will say to that person's partner, "Hey, settle this person down." So that's another cost: they're going behind your back and letting your partner know that you may be asking too many questions.

If you find something wrong, that’s seen as the cost of doing business. But if you don’t find something, it’s often viewed as wasted time.

How did you get into studying skepticism?

I was doing research related to people either not detecting fraud red flags in financial statements, or not responding to them properly. Every time we looked at a fraud red flag, it was in published financial statements that had been audited and received a clean opinion. The question is: how did those red flags get through the screening? That's where you get into skepticism and incentive problems, and studying that has been the better part of 12 or 14 years of my life.

How has the industry responded to your research?

All of this research has influenced the dialogue around professional standards on skepticism, both internationally and domestically. My research has been funded by international and U.S. stakeholders in the audit profession, so they're obviously interested from a standard-setting and practice perspective.

But it's also been supported by the largest audit firms. They've been very interested in understanding how they can improve skepticism and which variables really have a big impact.

A lot of my studies are looking at how different stakeholders or players affect the financial reporting process. How do they impact skepticism? For example, the audit committee of a company’s board of directors is in charge of the audit. What support can they provide to the audit engagement team to back them up when they have to ask hard questions of management? We’ve done quite a bit of work in that area. 

When we look at rewarding skepticism, what's very important is that the reward is consistent. We find a very strange effect: when a supervisor flips from penalizing skepticism to rewarding it, it actually leads to less skepticism among the staff. They think, "Hey, I got the benefit of the doubt; that's not going to happen again." So a repeated, consistent reward that people can rely on is very important.

The other thing firms are very interested in is using advanced data analytics to analyze more and more data. For example, if you're testing sales at Amazon by looking at every sales transaction rather than a smaller sample, you have to understand that until those tools are incredibly refined, you're gonna have a lot of false positives. And "false positive" is just another way of saying "costly skepticism." There's an anomaly you need to investigate, but it's more than likely not going to be a misstatement.
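To make the base-rate problem concrete, here is a minimal sketch with entirely hypothetical numbers (the transaction count, misstatement rate, and detector accuracy are illustrative assumptions, not figures from Brazel's studies). It shows why full-population analytics can flood an audit team with false positives even when the screening tool is fairly accurate:

```python
# Minimal sketch of the false-positive problem in full-population testing.
# All numbers are hypothetical, chosen only to illustrate the base-rate effect.

n_transactions = 1_000_000   # every sales transaction, not a sample
misstatement_rate = 0.0001   # misstatements are rare: 1 in 10,000
sensitivity = 0.95           # tool flags 95% of true misstatements
false_positive_rate = 0.02   # tool wrongly flags 2% of clean transactions

true_misstatements = n_transactions * misstatement_rate
clean_transactions = n_transactions - true_misstatements

true_flags = true_misstatements * sensitivity        # real problems caught
false_flags = clean_transactions * false_positive_rate  # dry holes to investigate

precision = true_flags / (true_flags + false_flags)

print(f"Flagged anomalies:           {true_flags + false_flags:,.0f}")
print(f"Actual misstatements caught: {true_flags:,.0f}")
print(f"Share of flags that pay off: {precision:.1%}")
# Roughly 20,000 flags, only about 95 of them real: under 1% of
# investigations "find gold," echoing the 97-out-of-100 dry holes above.
```

Under these assumptions, nearly every flag the auditor must chase down turns out to be explainable, which is exactly the dynamic that makes skepticism look costly in hindsight.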

A recent study suggests that about 80% of audits identify at least one material misstatement in the financial statements. However, the number of material misstatements in any set of financial statements is normally low, like three or four. One thing I stress to my students is that you're talking about having to identify a low-likelihood event, so it takes a lot of digging to find it. It's an iterative process, and that iteration slows down if you've got a boss who doesn't have your back and cares more about the cost of the audit than its actual quality.

What does it mean to you as a researcher when your work is referenced by the PCAOB?

It means a lot to me. One of the reasons I love Poole College is that we're high on impact. It doesn't end with the journal article. We're trying to change practice and improve whatever area we do research in, whether it's marketing, finance, or supply chain. We want to make the world a better place in that field, and have the people who pull the strings, from regulators to standard setters, seeking us out to answer the questions they're interested in.

So the PCAOB acknowledging that is part and parcel of why I love Poole College: we're really about doing rigorous academic research that is scientifically valid, but then taking it out to the world and conveying it. I really value our "Think and Do" mentality.

This post was originally published in Poole Thought Leadership.