Joseph Plazo’s Radical Take on the Limits of Intelligence—Artificial and Otherwise
It was supposed to be a coronation of machine supremacy. Instead, it became a confrontation.
MANILA — At the heart of the Philippines’ premier university, Asia’s brightest students—engineers, economists, AI researchers—converged to explore the future of investing through algorithms.
They expected Plazo to preach automation, unveil breakthroughs, and fan their enthusiasm.
Instead, they got silence, contradiction, and truth.
---
### The Sentence That Changed the Room
He’s built AI systems with mythic win rates.
The moment he began speaking, the room settled.
“AI can beat the market. But only if you teach it when not to try.”
The murmurs stopped.
It challenged everything the crowd believed.
---
### A Lecture or a Lament?
Plazo didn’t pitch software.
He displayed machine misfires—algorithms buying at peaks, shorting at troughs, mistaking irony for euphoria.
“These systems are reflections—not predictions.”
Then, with a pause that felt like a punch, he asked:
“Can it compute the panic of dominoes falling on Wall Street? Not the charts. The *emotion*.”
You could hear a breath catch.
---
### But What About Conviction?
Of course, the audience pushed back.
A PhD student from Kyoto noted how large language models now detect emotion in text.
Plazo nodded. “Feeling isn’t forecasting.”
A data scientist from HKUST proposed that probabilistic models could one day simulate conviction.
Plazo’s reply was metaphorical:
“You can simulate weather. But conviction? That’s lightning. You can’t forecast where it’ll strike. Only feel when it does.”
---
### When Faith Replaces Thinking
Plazo’s core argument wasn’t that AI is broken; it was that humans are outsourcing responsibility.
“People are worshipping outputs like oracles.”
Still, he clarified: AI belongs in the cockpit—not in the captain’s seat.
His company’s systems scan sentiment, order flow, and liquidity.
“But every output is double-checked by human eyes.”
He paused, then delivered the future’s scariest phrase:
“‘The model told me to do it.’ That’s what we’ll hear after every disaster in the next decade.”
---
### Why This Message Stung Harder in the East
Across Asian tech hubs, AI is gospel.
Dr. Anton Leung, a Singapore-based ethicist, whispered after the talk:
“He reminded us: tools without ethics are just sharp objects.”
In a private dialogue among professors, Plazo pressed the point:
“Don’t just teach students to *code* AI. Teach them to *think* with it.”
---
### Sermon on the Market
The ending was elegiac, not technical.
“The market isn’t math,” he said. “It’s a tragedy, a comedy, a thriller—written by humans. And if your AI can’t read character, it’ll miss the plot.”
No one moved.
Some compared it to Jobs at Stanford.
Plazo’s aim was simpler: to remind us that we are still responsible.