Benjamin L. Rockefeller
AI Researcher
Nanyang Technological University



In my darkest hour, AI lit the way—showing me how once-impossible ideas could take shape, and revealing its greatest power: helping us make better decisions amid complexity and uncertainty. That moment changed how I see the world—and what I believe AI should really be about.

To realize that vision, reliability must come first. My initial focus is addressing hallucinations—those moments when AI speaks with confidence but delivers falsehoods. I'm working on tools that help people make smarter decisions—in law, finance, and other fields where mistakes can be costly. My long-term goal is far more ambitious: I want AI to move beyond linear generation—to reason, adapt, and connect ideas across complex contexts, until it can truly think, not just generate.

If each of us has one mission in life, then mine is clear: to devote my life to building the future of AI.

Research Interests | Building a Hallucination-Resistant Architecture for LLMs

While large language models (LLMs) have made significant strides in generating fluent text, hallucination remains a critical obstacle in high-stakes applications, where a lack of factual grounding, logical inconsistencies, and unverifiable outputs undermine reliability. These failures expose fundamental gaps in inference control, cognitive feedback, and knowledge alignment.

To address this challenge, I propose a three-tiered, progressive hallucination-resistant framework centered on a Planning-Reflection-Evidence feedback loop, built around the following three components (a minimal sketch of the loop follows the list):

  • SFPL (Structured Feedback Planning Layer): imposes a pre-generation reasoning plan, replacing free-form generation with precise control over the initial steps and mitigating generation bias
  • REFLEXION: introduces recursive self-feedback and cognitive evaluation, enabling dynamic correction of cognitive errors during generation
  • CAT (Citation-Aware Transformer): integrates verifiable sources directly into the reasoning process to ensure factual grounding
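
Below is a minimal sketch of how the three layers could be wired into a single Planning-Reflection-Evidence loop. It illustrates the intended control flow only; call_llm and retrieve are placeholder names for whatever model and retrieval backend are used, not part of any existing implementation of this framework.

    # Minimal sketch of the Planning-Reflection-Evidence loop.
    # call_llm() and retrieve() are placeholders, not real APIs.

    def call_llm(prompt: str) -> str:
        """Placeholder: send `prompt` to any LLM and return its text reply."""
        raise NotImplementedError

    def retrieve(query: str, k: int = 3) -> list[str]:
        """Placeholder: return k source passages relevant to `query`."""
        raise NotImplementedError

    def answer(question: str, max_rounds: int = 3) -> str:
        # 1. Planning (SFPL): fix the reasoning steps before any text is generated.
        plan = call_llm(f"List the reasoning steps needed to answer:\n{question}")

        # 2. Evidence (CAT): gather verifiable sources and expose them as numbered citations.
        sources = retrieve(question)
        context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
        draft = call_llm(
            f"Question: {question}\nPlan:\n{plan}\nSources:\n{context}\n"
            "Answer step by step, citing sources as [n]."
        )

        # 3. Reflection (REFLEXION): critique the draft and revise until no issues remain.
        for _ in range(max_rounds):
            critique = call_llm(
                f"Check this answer against the sources.\nSources:\n{context}\n"
                f"Answer:\n{draft}\nList any unsupported claims, or reply OK."
            )
            if critique.strip() == "OK":
                break
            draft = call_llm(
                f"Revise the answer to fix these issues:\n{critique}\n"
                f"Answer:\n{draft}\nSources:\n{context}"
            )
        return draft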

This architecture is designed to improve the reliability of LLMs, moving them from "statistical generation" toward "causal-reasoning-driven cognitive systems," and to provide both a theoretical foundation and practical tools for hallucination mitigation. The aim is to support the trusted deployment, accountability, and verifiability of large models in high-risk scenarios.

Hallucination-Resistant Architecture

Developing a three-tiered framework to mitigate hallucinations in large language models:

  • SFPL - Structured Feedback Planning Layer
  • REFLEXION - Recursive self-feedback
  • CAT - Citation-Aware Transformer

AI Reasoning Framework

A cognitive architecture that enables AI systems to reason, adapt, and connect ideas across complex contexts.

Financial AI Advisor

An AI system designed for high-stakes financial decision making with built-in hallucination detection.

Legal AI Assistant

AI-powered legal research assistant with citation verification and error correction capabilities.

Medical Diagnosis AI

Reliable AI system for medical diagnosis with explainable reasoning and source verification.

Educational AI Tutor

Adaptive learning system that provides accurate, verifiable explanations across STEM subjects.

Beyond Linear Generation

Adaptive Reasoning in Large Language Models: Moving beyond statistical generation to causal-reasoning-driven cognitive systems.

Status: In review at NeurIPS 2024

SFPL Architecture

Structured Feedback Planning Layer: A novel approach to pre-generation reasoning plans for LLMs.

Status: Submitted to JMLR
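
To make the idea concrete, a pre-generation reasoning plan can be represented as an explicit data structure that fixes each step's goal and prerequisites before any text is produced. The sketch below is illustrative only and is not the architecture described in the submission; Step, Plan, and the generate callable are assumed names.

    # Illustrative sketch of a pre-generation reasoning plan (not the SFPL
    # implementation itself): the plan fixes the order, goal, and inputs of
    # every step before free-form generation begins.

    from dataclasses import dataclass, field

    @dataclass
    class Step:
        goal: str                                             # what this step must establish
        depends_on: list[int] = field(default_factory=list)   # indices of prerequisite steps
        output: str = ""                                      # filled in during generation

    @dataclass
    class Plan:
        question: str
        steps: list[Step]

        def execute(self, generate) -> str:
            """Run each step in order, feeding it only its declared prerequisites.
            `generate` is a placeholder prompt-to-text callable."""
            for step in self.steps:
                prior = "\n".join(self.steps[j].output for j in step.depends_on)
                step.output = generate(
                    f"Question: {self.question}\n"
                    f"Goal of this step: {step.goal}\n"
                    f"Established so far:\n{prior}"
                )
            return self.steps[-1].output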

CAT Transformer

Citation-Aware Transformer: Integrating verifiable sources directly into the reasoning process.

Status: Preprint available
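
One way to read "integrating verifiable sources into the reasoning process" is that every cited sentence must be checkable against the source it cites. The sketch below shows such a check as a post-hoc filter; it is a simplification, not the CAT architecture, and entails is a placeholder for any entailment test (an NLI model or an LLM judgment).

    # Illustrative citation check (a simplification, not the CAT model):
    # every sentence carrying a [n] marker is tested against source n.

    import re

    def unsupported_claims(answer: str, sources: dict[int, str], entails) -> list[str]:
        """Return sentences whose cited source does not support them.
        `entails(passage, claim) -> bool` is a placeholder entailment test."""
        flagged = []
        for sentence in re.split(r"(?<=[.!?])\s+", answer):
            claim = re.sub(r"\[\d+\]", "", sentence).strip()
            for n in re.findall(r"\[(\d+)\]", sentence):
                source = sources.get(int(n), "")
                if not source or not entails(source, claim):
                    flagged.append(claim)
        return flagged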

REFLEXION System

Recursive self-feedback and cognitive evaluation for dynamic error correction in LLMs.

Status: In preparation
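
The name echoes the Reflexion pattern of verbal self-feedback (Shinn et al., 2023): each failed attempt leaves behind a written lesson that conditions the next attempt. The sketch below shows that generic pattern with placeholder generate, evaluate, and reflect callables; it is not the system in preparation.

    # Generic verbal self-feedback loop (a sketch, not the REFLEXION system):
    # failed attempts are turned into written lessons that steer the next try.

    def reflexion_loop(task: str, generate, evaluate, reflect, max_attempts: int = 4) -> str:
        """Placeholders: generate(task, memory) -> answer,
        evaluate(answer) -> (ok, feedback), reflect(answer, feedback) -> lesson."""
        memory: list[str] = []        # accumulated self-reflections
        answer = ""
        for _ in range(max_attempts):
            answer = generate(task, memory)
            ok, feedback = evaluate(answer)
            if ok:
                break
            memory.append(reflect(answer, feedback))   # turn the failure into a lesson
        return answer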

"In my darkest hour, AI lit the way—showing me how once-impossible ideas could take shape, and revealing its greatest power: helping us make better decisions amid complexity and uncertainty."

Current Explorations

  • Cognitive Architectures: Developing AI that can "think" rather than just generate
  • Uncertainty Quantification: Methods for AI to express calibrated confidence in its outputs (a small sketch follows this list)
  • Cross-Domain Reasoning: Enabling AI to connect concepts across disparate fields
  • Ethical AI Frameworks: Building accountability into AI systems
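
On uncertainty quantification, one simple, model-agnostic baseline is self-consistency: sample the model several times and report how often the majority answer recurs. The sketch below assumes a placeholder sample function and is only one of many possible confidence estimates.

    # Self-consistency as a rough confidence score: sample the model n times
    # and report the agreement rate of the majority answer.
    # `sample` is a placeholder for any stochastic question -> answer call.

    from collections import Counter

    def answer_with_confidence(question: str, sample, n: int = 10) -> tuple[str, float]:
        answers = [sample(question) for _ in range(n)]
        best, count = Counter(answers).most_common(1)[0]
        return best, count / n

    # Usage: abstain, or escalate to a human reviewer, when the returned
    # confidence falls below a chosen threshold.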

Experimental Concepts

Neural-Symbolic Integration

Combining neural networks with symbolic reasoning for more reliable AI
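
As a toy illustration of one neural-symbolic pattern, the neural model proposes a structured answer and a hand-written set of symbolic constraints accepts or rejects it before it reaches the user. The propose function and the example constraints below are assumptions chosen for illustration, not part of any system listed here.

    # Toy neural-symbolic pattern: the neural side proposes, the symbolic side checks.
    # `propose` and the constraints are illustrative placeholders.

    def propose(question: str) -> dict:
        """Placeholder for a neural model that returns structured fields."""
        raise NotImplementedError

    CONSTRAINTS = [
        ("amount is non-negative", lambda a: a.get("amount", 0) >= 0),
        ("rate is a valid percentage", lambda a: 0 <= a.get("rate", 0) <= 100),
    ]

    def checked_answer(question: str) -> dict:
        answer = propose(question)
        violations = [name for name, rule in CONSTRAINTS if not rule(answer)]
        if violations:
            raise ValueError(f"symbolic check failed: {violations}")
        return answer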

AI Self-Auditing

Systems that can evaluate and explain their own decision processes

Causal Reasoning Modules

Enhancing LLMs with explicit causal inference capabilities
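
A minimal worked example of what explicit causal inference adds: intervening on a variable (Pearl's do-operator) severs its dependence on its usual causes, which plain conditioning does not. The rain/sprinkler model below is a standard textbook toy, not a component of any system listed here.

    # Tiny structural causal model: rain influences both the sprinkler decision
    # and whether the grass is wet. Conditioning on "sprinkler is on" makes rain
    # look unlikely; intervening with do(sprinkler=on) leaves rain at its prior.

    import random

    def sample(do_sprinkler=None):
        rain = random.random() < 0.3
        if do_sprinkler is None:
            sprinkler = random.random() < (0.1 if rain else 0.5)   # observational mechanism
        else:
            sprinkler = do_sprinkler                               # intervention: mechanism cut
        wet = rain or sprinkler
        return {"rain": rain, "sprinkler": sprinkler, "wet": wet}

    def estimate(n: int = 100_000) -> None:
        obs = [sample() for _ in range(n)]
        do = [sample(do_sprinkler=True) for _ in range(n)]
        p_obs = sum(s["rain"] for s in obs if s["sprinkler"]) / max(1, sum(s["sprinkler"] for s in obs))
        p_do = sum(s["rain"] for s in do) / n
        print(f"P(rain | sprinkler seen on) ~ {p_obs:.2f}")   # about 0.08: conditioning is biased
        print(f"P(rain | do(sprinkler=on)) ~ {p_do:.2f}")     # about 0.30: the prior is preserved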

The Future of Reliable AI

Keynote presentation at AI Reliability Summit 2024

Mitigating Hallucinations

Journal of AI Ethics, Vol. 8, Issue 3

AI in High-Stakes Environments

Workshop on Trustworthy AI Systems

From Generation to Reasoning

Interview with AI Research Podcast
