
Authentic or Not: The Question Is Neither Hypothetical Nor Malicious

  • Writer: Julie O'Connor
  • 3 days ago
  • 7 min read

Updated: 2 days ago

When Singapore's largest bank declares documents authentic and AI returns an 18% probability, who do you trust?

Authentic or not, the question is neither hypothetical nor malicious. As AI-powered document verification tools become increasingly sophisticated, they are beginning to challenge the authority of even the most established human institutions. In a real-world case, compliance staff at one of Singapore's most trusted financial institutions first examined a set of letters in 2014. After taking almost eight weeks to refuse to authenticate them, and after numerous internal investigations, the bank later reversed course and declared the letters authentic, with no signs of tampering. Fast forward to 2022, and lawyers acting on behalf of DBS advised Asia Sentinel editor John Berthelsen that the letters were authentic, and that I had acted with malice.

Bearing in mind that an independent forensic document examiner had already determined that one of the letters lacked credibility, when I recently saw that Singapore's regulatory body, the Monetary Authority of Singapore, was partnering with the banking industry to deploy artificial intelligence and machine learning to combat financial crime, I wondered whether AI would see the same things we had. The same documents provided to DBS were run through an AI analysis. The result was stark: an initial probability of authenticity ranging between 15% and 25%. This collision between institutional expertise and machine intelligence raises a pressing question: when humans and AI disagree, which one do we trust, and what are the consequences of getting it wrong?


I fully appreciate that the comparison is not entirely equal, and context matters enormously. The bank's experts held a critical advantage: they could physically cross-reference the documents against originals held in their own files and compare signatures. The AI, by contrast, was working blind to all of that, analysing only what was placed in front of it: a scanned copy, with no access to the bank's internal records and no ability to examine original signatures. When provided with additional information that had also been available to DBS at the time, much as an investigative journalist pieces together a story by gathering every available thread, the probability narrowed further and held firm at 18%. Without access to the file copies of the letters, or samples of signatures, the AI reached a fundamentally different conclusion.

We have recently seen precisely how consequential that lack of access to information can be. The defamation cases brought by two of Singapore's most prominent Ministers against Bloomberg, its journalist, and Terry Xu — editor of The Online Citizen — serve as a sobering reminder that working from incomplete information, however diligently, carries profound legal and reputational risk. The AI, like those journalists, could only assess what it was given. The question is whether the legitimate alarm now raised by AI will be handled with the institutional integrity the situation demands.

The gap between 15-25% and 100% does not simply represent a difference of opinion; it demands independent verification. Two assessments this far apart, each grounded in a similar set of evidence, point not to a clear answer but to an unresolved question that only further scrutiny can settle.

Before attributing the AI's findings entirely to the limitations of a scanned copy, it is worth pausing. Not all anomalies can be explained away by poor image quality. The irregularities identified in these letters are substantive and structural.


  • The document describes itself as both the 4th and 6th Supplemental letter in the same text.

  • Section F is missing entirely: the document jumps from E to G, then internally cross-references G where E is meant, two errors pointing to the same gap.

  • Sections C and D have duplicate labels, inconsistent with any professional legal drafting standard.

  • The former company name appears three months after the entity was legally renamed.

  • The word "Endeavor" appears where DBS institutional style requires the British English "endeavour."

  • A deadline is recorded as 30 August 2103 — ninety years in the future — in a document dated 2013, an error that should have been immediately apparent to any reviewing lawyer or banker.

  • The 2013 letter requires directors to maintain a minimum net worth from 30 June 2012, twelve months before the document was signed, a retroactive undertaking that is legally impossible to perform.

  • Perhaps most telling of all for a formal banking instrument: the letter carries no reference number and no return address, both of which I have seen in authentic DBS correspondence, even draft copies. A document without these is not simply informal. In the context of institutional banking, it is incomplete by definition.

  • And signature characteristics are consistent with tracing rather than natural signing, with the signatory's title panel reading "Managing Director, DBS Bank Ltd" rather than her usual designation as Managing Director of the Institutional Banking Group.

These are not scan artefacts. They are the kinds of errors that should not survive a professional review, and the fact that they appear in letters threatening enforcement proceedings makes their presence not merely irregular, but inexplicable. Letters of this nature, carrying the weight of legal consequence for their recipient, would ordinarily pass through multiple layers of review before leaving the bank. Every reference number, every section label, every date, every signatory title would be verified. That is not a high standard. That is the minimum. The question that cannot be answered by attributing these errors to a poor quality scan is this: how does a letter threatening enforcement proceedings leave one of Southeast Asia's largest banks without a reference number, without a return address, with a deadline ninety years in the future, and with a signatory whose own title panel does not match her position? It does not.
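Irregularities of this structural kind are exactly what automated pre-screening can surface in seconds, well before any probabilistic model or human reviewer is involved. As a minimal sketch only, and not the analysis used in this case, a rule-based checker in Python might flag several of the anomalies listed above (the sample text and every pattern below are hypothetical illustrations):

```python
import re

MONTHS = ("January|February|March|April|May|June|July|August|"
          "September|October|November|December")

def flag_anomalies(text, doc_year):
    """Flag structural red flags of the kind described above.

    A hypothetical rule-based pre-screen; real document forensics
    would combine many more signals than these.
    """
    findings = []

    # Section labels: gaps (e.g. E jumping straight to G) and duplicates.
    labels = re.findall(r"^Section ([A-Z])\b", text, re.MULTILINE)
    for prev, cur in zip(labels, labels[1:]):
        if ord(cur) - ord(prev) > 1:
            findings.append(f"section gap: {prev} jumps to {cur}")
    seen = set()
    for label in labels:
        if label in seen:
            findings.append(f"duplicate section label: {label}")
        seen.add(label)

    # Dates implausibly far beyond the document's own year
    # (e.g. a 2103 deadline inside a 2013 letter).
    for day, month, year in re.findall(rf"(\d{{1,2}}) ({MONTHS}) (\d{{4}})", text):
        if int(year) > doc_year + 1:
            findings.append(f"implausible date: {day} {month} {year}")

    # American spelling where the house style is British English.
    if re.search(r"\bEndeavor", text):
        findings.append("'Endeavor' used where house style requires 'endeavour'")

    # Formal banking correspondence should carry a reference number.
    if not re.search(r"\bRef(erence)?\s*(No\.?|Number)", text, re.IGNORECASE):
        findings.append("no reference number found")

    return findings


# Hypothetical sample exhibiting several of the anomalies above.
sample = """Section C Net worth covenant.
Section C Security.
Section E Events of default.
Section G Notices.
Completion shall occur no later than 30 August 2103.
The Borrower shall Endeavor to comply.
"""
for finding in flag_anomalies(sample, doc_year=2013):
    print(finding)
```

Checks like these are deterministic and auditable, which is precisely why they make a useful complement to a probabilistic authenticity score: each rule encodes an explicit assumption about house style that either holds or does not.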

What makes this case considerably more troubling, however, is that the institution tasked with independent verification was DBS's own Legal, Compliance and Secretariat department, the very same division that initially took almost eight weeks to refuse to authenticate the letters. Eight weeks is not a timeline consistent with any urgency, and during those eight weeks, the bank's influential client did not wait. The transaction was completed. And by the time the refusal to authenticate arrived, it was, for all practical purposes, too late. The sequence of events raises a question that eight weeks of silence, followed by an eventual refusal, does not answer: at what point did DBS Legal, Compliance and Secretariat form the view that these letters could not be authenticated, and why did that view take eight weeks to arrive, when the anomalies now visible to an AI analysing a scanned copy were there from the beginning?

Whether 15-25% or 18%, these numbers deserve to be taken seriously, not just in this case, but as a signal of something far larger. We are entering an era in which AI systems can serve as conflict-free witnesses in precisely the situations where human institutions are least able to be. They carry no client relationships, no credit exposure, no reputational stake in a particular outcome. They do not need eight weeks. Regulators in Singapore and beyond should take note. The Monetary Authority of Singapore has long positioned the city-state as a global standard-bearer for financial governance, yet cases like this expose a structural vulnerability that no amount of compliance paperwork can paper over: when the verifying institution is also the exposed creditor, there is a risk that the verification itself is compromised.


What is needed is a formal framework that mandates truly independent document authentication in high-value transactions, one that includes AI-assisted analysis conducted by a party with no financial interest in the outcome, and whose findings are disclosed to all parties before, not after, a transaction is completed. The lesson of this case is not that AI is infallible; we know it is not. But an 18% probability of authenticity, returned in seconds by an algorithm with nothing to lose, may ultimately prove more honest than eight weeks of silence from those with everything to protect.


Ultimately, the Monetary Authority of Singapore cannot credibly champion AI objectivity in financial crime detection while declining to apply that same standard of objectivity to documented allegations within its own regulated sector. To embrace AI as a tool when it serves institutional interests, while dismissing its findings when they threaten them, would be a contradiction. The 18% was not returned by a disgruntled observer with an agenda. It was returned by the same category of technology MAS is now asking Singapore's banks to trust with the integrity of the financial system. If that technology is reliable enough to detect financial crime, it is reliable enough to ask uncomfortable questions about documents that have never been satisfactorily explained, and about the eight weeks of silence that allowed a transaction to complete before those questions could be answered.



Coming Next: "Building AI Is Easy to Praise. Acting on Its Findings Is the Real Test" - The Letters Were Just the Beginning


This case was never simply about two letters. The question of their authenticity is a matter that concerns DBS, but what lies behind it reaches considerably further than any single institution. Allegations of forgery contained in an undisclosed writ, a financial incentive offered to hand over evidence and retract complaints, and conflicts that implicate Singapore's broader regulatory and legal institutions are matters that go well beyond the bank's role in authenticating correspondence. The letters were the visible surface. What sits beneath them, and who is accountable for it, has never been satisfactorily examined, let alone answered.

Which is why the arrival of AI as a conflict-free analytical tool is not merely a technological development. It is a test of institutional character. In my next article, "Building AI Is Easy to Praise. Acting on Its Findings Is the Real Test", I will take a deeper look at the roadblocks that can turn objective findings into subjective outcomes, the vested interests, the institutional inertia, and the quiet pressures that have a way of blunting inconvenient conclusions.


The question is no longer whether AI can be objective. It demonstrably can. But can it be trusted?



