Introducing Kepler
AI is extraordinary at reasoning. It interprets intent, synthesizes information, and generates language better than any technology in history.
But when it comes to retrieving facts and computing numbers, AI wasn’t designed for that job. Ask the same question twice and you might get different answers. A generated number sounds exactly like a verified one. There’s no tell.
Every major industry runs on numbers. A misread number destroys a patient's diagnosis. A wrong decimal bankrupts a trade. A fabricated source torpedoes a deal. In healthcare, legal, insurance, and government, the cost of a wrong number isn't a rounding error. It's a lawsuit, a missed diagnosis, a failed mission. In finance, it's millions of dollars and a career.
Right now, none of these industries can fully trust AI for the decisions that actually matter. Not because AI is bad, but because it’s being asked to do things it wasn’t built to do.
Over the past year, we talked to 137 financial firms. VPs, MDs, analysts, associates across private equity, hedge funds, and investment banks. Same story everywhere: everyone wants to use AI, nobody trusts it. Impressive demos that fall apart in production. Numbers that look right but aren’t. Sources that don’t exist.
One MD at a multi-billion dollar fund: “I can’t put a number in front of a client if I can’t show where it came from.”
An associate at a PE firm: “I can’t trust what I can’t verify.”
I’m an EMT. I’ve worked 12+ hour shifts and watched what happens when someone gets a number wrong. A decimal in the wrong place. A dosage misread. In medicine, bad data hurts people. Sometimes it kills them. Now we’re entering an age where AI-generated numbers look identical to verified ones, and there’s no built-in way to tell the difference.
Everyone’s trying to solve hallucination by making AI smarter. More training data, better retrieval, more guardrails. These approaches reduce errors. They don’t eliminate them.
We had a different idea: if the AI doesn’t produce numbers, it can’t get them wrong.
This isn’t about limiting AI. It’s about using it for what it’s best at. AI is extraordinary at understanding what you’re asking: parsing intent, handling ambiguity, interpreting natural language in ways that felt like science fiction five years ago. But retrieving facts and doing math are deterministic tasks, and deterministic tasks deserve deterministic tools. So we let each system do what it does best.
When you ask Kepler a question, AI interprets your intent. Then deterministic code retrieves the data and runs the calculation. The AI doesn’t produce numbers. It can’t get them wrong. Every figure traces to its source. Every answer is reproducible.
The AI interprets. Code executes. They never cross lanes.
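The split above can be sketched in a few lines. This is a minimal illustration of the pattern, not Kepler's actual API; every name, figure, and source citation here is hypothetical, and the AI step is stubbed out since the point is what it returns: a structured query, never a number.

```python
from dataclasses import dataclass

# Verified source data: every figure carries a citation. (Illustrative values.)
FACTS = {
    ("ACME", "revenue_2023"): (1_200_000_000, "10-K filing, p. 42"),
    ("ACME", "revenue_2022"): (1_000_000_000, "10-K filing, p. 41"),
}

@dataclass
class Query:
    """Structured output of the AI layer: intent only, never numbers."""
    entity: str
    metric: str
    operation: str  # e.g. "yoy_growth"

def interpret(question: str) -> Query:
    # In a real system this is the AI's only job: map natural language
    # to a structured query. Stubbed here for illustration.
    return Query(entity="ACME", metric="revenue", operation="yoy_growth")

def execute(q: Query) -> tuple[float, list[str]]:
    # Deterministic layer: retrieval and arithmetic, fully traceable.
    new, src_new = FACTS[(q.entity, f"{q.metric}_2023")]
    old, src_old = FACTS[(q.entity, f"{q.metric}_2022")]
    if q.operation == "yoy_growth":
        return (new - old) / old, [src_new, src_old]
    raise ValueError(f"unknown operation: {q.operation}")

q = interpret("How fast did ACME's revenue grow last year?")
value, sources = execute(q)
print(f"{value:.1%}", sources)  # same answer every run, with sources
```

Because `execute` is plain code over verified data, the answer is reproducible by construction, and the source list travels with every figure.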
I spent seven years at Palantir building data infrastructure for defense and intelligence, the kind of systems where a wrong number doesn’t just cost money, it costs lives and compromises missions. Later, I became Citadel’s first Head of Business Engineering. My co-founder John spent eleven years at Palantir building the products that sat between raw data and real users. We’ve been building toward this, in different forms, for fifteen years. We learned the same lesson in every deployment: the magic isn’t in the database. It’s in the layer between the data and the user.
Our team includes senior engineers from Palantir and Citadel, engineers from Meta, Bloomberg, and Stanford, and financial experts who’ve built $100M+ businesses. We’re backed by people who built the foundational data platforms of the last decade: Jordan Tigani (MotherDuck), Tristan Handy (dbt), Savin Goyal (Outerbounds), and investors from Pebblebed, including the founders of Facebook AI Research and OpenAI.
In 1601, Johannes Kepler inherited twenty years of astronomical observations from Tycho Brahe. The most accurate dataset of planetary motion ever assembled. For centuries, astronomers assumed planets moved in perfect circles. Elegant and wrong. Kepler had data he could trust, so he followed it where it led. The math didn’t work for circles. It worked for ellipses.
That’s the power of trusted data. It reveals truths that would otherwise stay hidden.
We’re building the infrastructure that gives AI the same foundation, so everyone can build on a platform they can truly trust: verified data, traceable to its source, precise enough to follow wherever it leads.
What gets discovered next is up to you.
Reach us at hi@kepler.ai
Read more: Trust in the Age of AI explores how forcing AI to show its work changes the nature of the output itself. And in Context Is the Easy Part, we explain why context engineering is really an engineering problem.