
Designing Reports for Humans (and Machines)

Written by Accurate Digits | 05/06/2025 11:13:18 PM

What Our AI Reviewer Struggled With… Your Finance Team Probably Does Too

Every table tells a story — but some are easier to follow than others.

As part of our ASX Top 20 review, we ran thousands of calculations through Accurate Digits, our AI-powered platform that checks financial reports for numerical consistency. Think of it like spell check for numbers: it simply asks, do the numbers add up?
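To make that concrete, here is a rough idea of what a "do the numbers add up" check can look like. The sketch below is illustrative only, not Accurate Digits' actual code; the ties_to_total helper, the tolerance and the segment figures are all made up for the example.

```python
# A minimal sketch of a "do the numbers add up" check.
# The helper name, tolerance and figures are illustrative assumptions,
# not Accurate Digits' actual implementation.

def ties_to_total(line_items, reported_total, tolerance=0.05):
    """Return True if the line items sum to the reported total,
    allowing a small tolerance for rounding in presented figures."""
    return abs(sum(line_items) - reported_total) <= tolerance

# Hypothetical segment revenues, in $m
segments = [1204.3, 876.9, 412.5]

print(ties_to_total(segments, 2493.7))  # True  (1204.3 + 876.9 + 412.5 = 2493.7)
print(ties_to_total(segments, 2503.7))  # False (off by 10.0, well beyond rounding)
```

The arithmetic itself is trivial. The hard part is working out which rows feed which total, and that is exactly where table structure matters.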

In many cases, the AI breezed through the checks. Totals tied. Percentages made sense. Everything aligned neatly.
In others, it slowed down or flagged potential issues that turned out to be structural or formatting quirks — not actual errors.

We noticed a pattern:

Where our platform struggled to interpret the logic of a table, the report was more likely to contain real-world errors.

🎓 Accurate Digits Is Like a Graduate Accountant

It helps to think of Accurate Digits as a graduate accountant.

  • It’s been trained on the fundamentals of maths, accounting and finance
  • It understands calculations, structure, and how financial statements work
  • But it hasn’t spent years inside your business, learning your formatting quirks or decoding your team’s spreadsheet logic

So when a table follows a logical structure — subtotal, line items, labelled movements — it performs well.
When structure breaks down, so does performance.

And that’s probably exactly what happens to members of your finance team and your auditors, particularly the junior ones doing the double-checking.

If a table makes a reviewer pause to ask, “Wait, what is this referencing? How does this calculation work?” — you’re not just slowing them down, you’re increasing the risk that genuine errors go unnoticed.

🔍 Common Pitfalls That Trip Up AI (and People Too)

These were some of the most common structures where our platform flagged confusion — and where actual errors were more likely to appear:

  • Movement columns without clear labels.
    It’s often unclear whether a column represents a dollar change or a percentage change, leaving too much to interpretation (see the sketch after this list).
  • Ambiguous subtotals, especially with multiple hierarchy levels.
    Inconsistent formatting, mixed indentation styles, and subtotals placed both above and below line items make it harder to trace how figures roll up.
  • Ratios that reference unclear or unexpected rows.
    Some ratios appeared to reference nearby data but didn’t. Others linked to rows buried deep in complex hierarchies, without clear structural cues to guide the reader.
  • Inconsistent table formatting.
    One table might follow a clear left-to-right logic with subtotals at the bottom, while the next breaks that pattern entirely — forcing readers to re-learn how to read each layout.
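
To make the first pitfall concrete, here is a small illustration. The classify_movement helper and the figures are hypothetical; the point is simply that the same unlabelled number can plausibly be read as either a dollar change or a percentage change.

```python
# Illustrative sketch of why an unlabelled "Movement" column is ambiguous.
# The classify_movement helper and all figures are hypothetical, chosen only
# to show how one number can read as a $ change or a % change.

def classify_movement(prior, current, reported, tolerance=0.05):
    """Guess whether a reported movement matches the dollar change
    or the percentage change between two periods."""
    dollar_change = current - prior
    pct_change = (current - prior) / prior * 100
    if abs(reported - dollar_change) <= tolerance:
        return "dollar change"
    if abs(reported - pct_change) <= tolerance:
        return "percentage change"
    return "unclear"

# Expenses rise from $80m to $100m and the movement column shows 25.0:
print(classify_movement(80.0, 100.0, 25.0))  # "percentage change" (a 25% rise)

# With a prior-period figure of $75m, the very same 25.0 reads as a $25m change:
print(classify_movement(75.0, 100.0, 25.0))  # "dollar change"
```

A one-word column heading ("$m" or "%") removes the guesswork entirely.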

None of these break any rules.

But they do increase the chance of error — and not just for AI.

If your reviewer has to pause to figure out where a subtotal came from or scroll back to decode a percentage, you’ve lost clarity.

And when that happens, reviews take longer — and the risk of errors slipping through goes up.

 

🧠 Final Thought: If AI Finds It Easy, So Might Everyone Else

This isn’t about designing for robots — it’s about making sure the people reading your report aren’t second-guessing your intent.

If our platform — trained, structured, and objective — struggles to follow the logic, there’s a good chance your team, your auditors and your investors are struggling too.

The easiest way to reduce risk?

Design your reports the way you’d want your graduate accountant to understand them. Because in many cases, that’s who’s reading them first.