About A24 Labs

A24 Labs is a small, focused research group exploring the technologies shaping the next decade. We operate at the intersection of artificial intelligence, cybersecurity, quantum computing, and regulatory compliance.

We build in the open, publish what we learn, and ship tools that matter. Our work spans foundational research, applied engineering, and the connective tissue between the two.

How We Work

We are researchers and engineers first. We believe the best insights come from building real systems, not from observing them from a distance. Every research question we pursue is grounded in a practical problem.

Our team is deliberately small. We prefer depth over breadth, quality over speed, and understanding over output. We collaborate closely with each other and with the broader community — through open-source contributions, published research, and public discourse.

What We Believe

Technology is moving faster than the institutions built to govern it. The gap between what is technically possible and what is responsibly deployed grows wider every year. We exist to help close that gap — by building tools, producing research, and sharing what we learn along the way.

Our Vision

We believe the most important problems of the next decade will sit at the intersection of our four research pillars — not within any one of them in isolation.

AI systems that cannot be secured are liabilities. Quantum advances that ignore cryptographic implications are irresponsible. Compliance frameworks that do not account for AI and quantum are already obsolete. The future belongs to organizations that can think across these boundaries.

Long-Term Direction

Our goal is to become a trusted source of applied research and practical tooling for organizations navigating the convergence of AI, security, quantum computing, and compliance. We are not building toward a single product — we are building a body of work.

What Guides Us

Rigor over speed. We would rather publish one well-tested finding than ten speculative ones. Research that cannot be reproduced or applied is not research — it is marketing.

Open by default. We share our methods, our data where possible, and our conclusions. Proprietary knowledge hoarding makes everyone less safe.

Practical impact. Every project we undertake must connect to a real problem. We measure success not by citations or downloads, but by whether our work helps someone make a better decision.

Honest assessment. We call out hype when we see it, including in our own domains. The gap between what is promised and what is delivered in technology is a problem we take seriously.

Our Team

A deliberately small group of researchers and engineers working across our four pillars.

Jordan Ellis

Founder & Principal Researcher

Focused on the intersection of AI systems and security infrastructure. Previously built ML platforms at scale and led security research teams. Believes the best research comes from building real things.

Maya Chen

Quantum Computing Researcher

Researching post-quantum cryptography migration strategies and hybrid classical-quantum systems. Background in theoretical physics and applied cryptography. Thinks rigorously about what quantum can and cannot do.

Alex Reeves

Compliance & AI Ethics Lead

Working on policy-as-code frameworks and AI governance models. Former regulatory analyst turned engineer. Passionate about making compliance programmable and genuinely useful.
