Personal Project
AI Work Quality Checker
Result: Built a tool that learns a supervisor's correction patterns and flags issues before work is submitted
February 2026
The Problem
Everyone has a boss who catches the same things over and over.
“You formatted the date wrong again.” “This column should be rounded to two decimals, not three.” “We always put the client code first, then the description.” “The totals row needs to include the adjusted figure, not the raw one.”
It’s not that the work is bad. It’s that every supervisor has preferences, standards, and institutional knowledge that take months to fully absorb. In the meantime, work goes back and forth — submitted, corrected, resubmitted, corrected again.
The employee in this case was doing solid work. But the corrections kept coming because the standards lived in the supervisor’s head, not in a system. You can’t check your work against rules you haven’t memorized yet.
The Interesting Problem
The first challenge wasn’t building the checker. It was getting the data.
All the supervisor’s feedback — corrections, preferences, recurring notes — lived in Microsoft Teams chat messages. Months of “hey, fix this” and “remember to always do it this way” scattered across threads.
No export button. No API access available. Just a chat window full of gold that you can’t programmatically touch.
The solution was practical, not elegant: screenshots. Capture the entire chat history through screen captures, then run OCR to extract the text. Not pretty. But it works, and sometimes the right answer is the one that actually gets the data out.
Once the text was extracted, the system parsed every correction into a structured rule: what was wrong, what it should have been, and what pattern to look for in future work.
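A minimal sketch of that parsing step, assuming the chat text has already been OCR’d into plain strings. The message examples, the `Rule` fields, and the “X, not Y” regex heuristic are illustrative assumptions, not the actual parser:

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    source: str     # the original correction message
    wrong: str      # what the supervisor flagged
    expected: str   # what it should have been

def parse_correction(msg: str) -> Optional[Rule]:
    """Turn a free-text correction into a structured rule by looking
    for 'should be X not Y' / 'to X, not Y' phrasing."""
    m = re.search(r"(?:should be|to)\s+(.+?),?\s+not\s+(.+)", msg)
    if not m:
        return None
    expected, wrong = m.group(1).strip(), m.group(2).strip()
    return Rule(source=msg, wrong=wrong, expected=expected)

# Hypothetical messages as they might come out of OCR:
rule = parse_correction("the date in B7 should be DD/MM not MM/DD")
```

A real parser would need many more phrasings (and OCR error tolerance), but the shape is the same: every correction becomes a before/after pair the checker can match against future work.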
What I Built
A checker that evaluates any spreadsheet or document against the supervisor’s learned standards.
You point the tool at a file. It scans every cell, format, and calculation, comparing them against 200+ patterns extracted from the supervisor’s historical corrections.
The output is a flagged report: “Row 14, Column D — this looks like the raw total, but your supervisor has corrected this three times to use the adjusted figure.” Or: “The date in B7 is in MM/DD format, but every correction in the log shows DD/MM is expected.”
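One learned rule from the date example above, sketched as a cell check. The regexes and the `cells` dict are hypothetical stand-ins for the real rule set and spreadsheet parser:

```python
import re

# Hypothetical learned rule: the correction log shows DD/MM/YYYY is expected.
EXPECTED_DATE = r"(0[1-9]|[12]\d|3[01])/(0[1-9]|1[0-2])/\d{4}"

def check_cells(cells: dict) -> list:
    """Flag date-like cell values that don't match the learned format."""
    flags = []
    for ref, value in cells.items():
        if re.fullmatch(r"\d{2}/\d{2}/\d{4}", str(value)):  # looks like a date
            if not re.fullmatch(EXPECTED_DATE, str(value)):
                flags.append(f"{ref}: '{value}' does not match the "
                             f"DD/MM/YYYY format seen in past corrections")
    return flags
```

Note the asymmetry: an ambiguous value like 05/06/2024 passes, because only values that *cannot* be DD/MM get flagged. That mirrors how the real report phrases findings as “your supervisor has corrected this before,” not hard errors.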
The key insight: it doesn’t just check formatting rules. It learns judgment calls. When the supervisor says “this number looks too high, double-check the source” on three separate occasions for the same type of calculation, the system learns to flag outliers in that category.
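That “looks too high” signal can be approximated with a simple statistical check over the history of values for the same calculation type. This is a sketch of one plausible approach (a standard-deviation threshold), not the system’s actual model:

```python
from statistics import mean, stdev

def flag_outlier(value: float, history: list, threshold: float = 2.0) -> bool:
    """Flag a value sitting more than `threshold` standard deviations from
    the historical mean for this calculation type: roughly the signal the
    supervisor's repeated 'double-check the source' notes encode."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold
```

The flag is advisory by design: it prompts “double-check the source,” exactly the way the supervisor’s original comments did.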
It gets better over time. Every new correction from the supervisor — captured and processed — adds to the rule set. The system that catches 85% of issues in month one catches 95% by month three.
Why This Works for Any Team
Every workplace has this dynamic: experienced people correcting less experienced people on the same things repeatedly. It’s not a training problem — it’s a knowledge transfer problem. The standards exist, but they live in people’s heads and chat histories.
This tool turns implicit knowledge into explicit checks. The supervisor’s years of “always do it this way” become automated validation rules that run before anything gets submitted.
The applications go way beyond one employee and one spreadsheet:
- Accounting teams — catch formatting and calculation patterns before month-end submissions
- Data entry teams — validate entries against historical correction patterns
- Report writers — check deliverables against client-specific preferences
- Any role where someone reviews someone else’s work and keeps finding the same issues
The technology isn’t the breakthrough. The idea is: your correction history is a training dataset. Use it.
The Result
The employee’s work started landing clean. Not because they memorized every rule — but because the tool caught the things they’d miss before the supervisor ever saw them.
Supervisor review time dropped by about 70%. Not because the supervisor stopped reviewing — but because the reviews went from “fix these twelve things” to “looks good, one note on page 3.”
The relationship improved too. Fewer corrections mean less frustration on both sides. The supervisor spends time on strategy instead of formatting. The employee builds confidence instead of anxiety.
That’s the part nobody puts in the project spec. But it’s the part that matters most.
Stack: Python · OCR · NLP pattern matching · AI analysis · Excel/document parsing
How it gets built
- Understand the bottleneck, the data, and what success looks like.
- Design the simplest solution that fully solves the problem.
- Iterative development with working previews at each stage.
- Handoff with documentation, training, and a 30-day support window.
Ready for results like these?
A 15-minute call is enough to scope your project and give you a real number.