The Keynote That Shattered AI Myths in Manila
Everyone expected triumph. But what happened instead left the audience reeling.
MANILA — At the heart of the Philippines’ premier university, Asia’s brightest students—engineers, economists, AI researchers—converged to see the future of trading laid bare by machines.
They expected Plazo to preach automation, unveil breakthroughs, and fan their enthusiasm.
They were wrong.
---
### The Sentence That Changed the Room
Joseph Plazo is no stranger to accolades.
As he stepped onto the podium, the room quieted.
“AI can beat the market. But only if you teach it when not to try.”
A chill passed through the room.
It wasn’t a thesis. It was a riddle.
---
### Dismantling the Myth of Machine Supremacy
There were no demos, no dashboards, no datasets.
He showed failures—bots confused by sarcasm, making billion-dollar errors in milliseconds.
“Most AI is trained on yesterday. Investing happens tomorrow.”
Then, with a pause that felt like a punch, he asked:
“Can your AI feel the fear of 2008? Not the charts. The *emotion*.”
No one answered. They weren’t supposed to.
---
### But What About Conviction?
The audience didn’t sit quietly. These were doctoral minds.
A PhD student from Kyoto noted how large language models now detect emotion in text.
Plazo nodded. “Knowing someone’s angry doesn’t tell you why—or what comes next.”
A data scientist from HKUST proposed that probabilistic models could one day simulate conviction.
Plazo’s reply was metaphorical:
“You can simulate weather. But conviction? That’s lightning. You can’t forecast where it’ll strike. Only feel when it does.”
---
### When Faith Replaces Thinking
Plazo’s core argument wasn’t that AI is broken; it was that humans are outsourcing responsibility to it.
“People are worshipping outputs like oracles.”
Still, he clarified: AI belongs in the cockpit as a copilot, not in the captain’s seat.
His company’s systems scan sentiment, order flow, and liquidity.
“But every output is double-checked by human eyes.”
He paused, then delivered the future’s scariest phrase:
“‘The model told me to do it.’ That’s what we’ll hear after every disaster in the next decade.”
---
### Why This Message Stung Harder in the East
In Asia, automation is often sacred.
Dr. Anton Leung, a Singapore-based ethicist, whispered after the talk:
“This was theology, not technology.”
That afternoon, over tea and tension, Plazo pressed the point:
“Don’t just teach students to *code* AI. Teach them to *think* with it.”
---
### Sermon on the Market
The ending was elegiac, not technical.
“The market isn’t math,” he said. “It’s human, messy, unpredictable. And if your AI can’t read character, it’ll miss the plot.”
And then, slowly, they stood.
Some compared it to hearing Taleb for the first time.
He came to remind us: we are still responsible.