The Lung That Failed and What It Taught Us About AI

How a $13B biotech company turned a wall of dead data into a predictive engine for saving lives, with help from Quansight & OpenTeams.

It started with a lung

It looked healthy. It had passed every test. Then, at the very last moment—after weeks of preparation—it failed.

No warning. No visible defects. Just a quiet blemish, invisible until it was too late. Every failed lung meant starting over. Cost, time, risk—reset.

That lung would never save a life.

One biotech company was building lab-grown, transplant-ready human lungs. And every failure was a mystery buried in a swamp of sensor data.

They knew the signals were in there. But their tools couldn’t see them.

The Problem: Data Couldn’t Keep Up

Every decellularization run created a storm of biological data: pressure curves, timing pulses, second-by-second readouts from dozens of sensors. But those signals were stored in clunky CSV files across brittle systems. Just opening a file could take ten minutes. Building a predictive model? Forget it. The infrastructure simply couldn’t keep up with the biology.

That’s when they called Quansight Consulting—recently merged with OpenTeams, both founded by open source pioneer Travis Oliphant.

The Fix: Infrastructure That Works

When Quansight stepped in, we brought a new way of thinking.

Before we talked about AI, we talked about plumbing. Because no algorithm can run on a broken pipe.

  • From CSV to Parquet: Data was converted to a columnar format, and files that once took ten minutes to open loaded in seconds (see the sketch after this list).
  • Nebari platform deployed: A scalable, shareable data science platform took root—version-controlled, reproducible, and ready for experimentation.
  • Web app tuned: Visualizations snapped to life. Views that once took minutes to render a small slice of data now let the team explore the entire dataset in seconds.
  • Dev environment stabilized: Goodbye, dependency hell. Hello, smooth workflows with Conda and Mamba.
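
To make the first step concrete, here is a minimal sketch of the kind of CSV-to-Parquet conversion described above. The directory layout, file names, and column names ("timestamp", "pressure") are illustrative assumptions, not the client's actual schema.

```python
# Minimal sketch of a CSV-to-Parquet conversion like the one described
# above. Paths and column names are illustrative placeholders, not the
# client's actual schema.
from pathlib import Path

import pandas as pd

RAW_DIR = Path("sensor_runs_csv")      # hypothetical folder of per-run CSVs
OUT_DIR = Path("sensor_runs_parquet")  # columnar output, one file per run
OUT_DIR.mkdir(exist_ok=True)

for csv_path in RAW_DIR.glob("*.csv"):
    # Parse timestamps once at ingest so downstream readers never re-parse.
    df = pd.read_csv(csv_path, parse_dates=["timestamp"])
    # Snappy-compressed Parquet stores each column contiguously, so a query
    # that needs one sensor channel reads only that column from disk.
    df.to_parquet(OUT_DIR / f"{csv_path.stem}.parquet", compression="snappy")

# Reading back a single channel touches a fraction of the bytes a full CSV
# parse would, which is where the seconds-instead-of-minutes win comes from.
pressures = pd.read_parquet(OUT_DIR / "run_001.parquet", columns=["pressure"])
```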


Only once the lab could breathe did we turn to prediction.

The Breakthrough: A Model That Could See What Eyes Couldn’t

Once the infrastructure was humming, it was time to ask the question that had haunted the lab for years: Can we predict which lungs will fail?

Our data scientists dove in.

  • They engineered features from every phase of the decellularization process.
  • They built logistic regression and tree-based models to predict three common failure types: blebs, bullae, and blemishes (a minimal sketch follows this list).
  • They used causal discovery techniques—not just correlations, but models that explained why.
  • And crucially, they made it interpretable. No black boxes. Every prediction was traceable, understandable, and testable.
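
As a rough illustration of that interpretable-modeling approach—not the team's actual pipeline—the sketch below trains a logistic-regression failure classifier on synthetic data. The feature names and labels are invented for the example; the real work also included tree-based models and causal discovery.

```python
# Rough illustration of an interpretable failure classifier, assuming one
# row of engineered features per decellularization run. Features and labels
# are synthetic stand-ins invented for this example.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_runs = 200
X = pd.DataFrame({
    "peak_pressure": rng.normal(20, 3, n_runs),       # hypothetical features
    "pressure_variance": rng.gamma(2, 1.0, n_runs),
    "phase2_duration_min": rng.normal(45, 5, n_runs),
})
# Synthetic label standing in for "did this run end in a bleb/bulla/blemish?"
y = (X["pressure_variance"] + rng.normal(0, 0.5, n_runs) > 2.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Logistic regression keeps every prediction traceable: each coefficient
# says how a feature shifts the log-odds of failure.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

coefs = pd.Series(model[-1].coef_[0], index=X.columns).sort_values()
print("Held-out accuracy:", model.score(X_test, y_test))
print("Per-feature log-odds contributions:")
print(coefs)
```

Because the coefficients map directly to the engineered features, a flagged run can be explained in the lab's own terms ("pressure variance in phase 2 was high"), which is what made predictions testable rather than a black box.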


Then it happened.

A lung failed—just as the model predicted. Only this time, it wasn’t a surprise. Teams could adapt. Investigate. Modify protocols.

The team could respond instead of react.

Why It Worked

We delivered clarity because we listened. We met with their scientists to understand what they needed, then built exactly that.

When the stakes are this high, black boxes don’t cut it.

You need transparent systems that your team can understand.

Before                               After
CSV bottlenecks: 10–15 min/file      Parquet + parallel reads
Manual QA, manual failure review     Reproducible, versioned environments (Nebari)
Unstructured environments            Stable Conda/Mamba dev environments
No ML                                Proactive failure response

Now, It’s Your Turn

You’ve got smart people. But smart people can’t outrun broken systems.

Don’t wait for your next failure to start fixing the pipeline.

Let’s talk before the data goes silent again.
