The tools and data standards we use and contribute to, from federated ICU data formats to open-source fairness auditing. Most of this is built in the open.
“Federation, not centralization: models and results cross institutional boundaries, raw patient data never does.”
Rush is a founding site in the CLIF Consortium. CLIF is an open-source standard for longitudinal ICU data and privacy-preserving multicenter research — purpose-built so sites spend less time on one-off harmonization. The network spans 12 institutions, 62 hospitals, and 808K+ patients (figures as published on clif-icu.com).
Reproducible pipelines for cohort construction, phenotyping, and causal inference across federated ICU datasets.
Interactive dashboards for exploratory cohort analysis and study reporting across multi-center datasets.
The CLIF Consortium maintains the Common Longitudinal ICU data Format (CLIF), an open-source standard for longitudinal ICU data and privacy-preserving multicenter research, published in Intensive Care Medicine (2025). Its harmonized relational model captures the temporal depth of an ICU stay, so analyses can be written once and run across sites without centralizing raw patient data. The canonical table and variable documentation lives in the CLIF 2.1.0 data dictionary.
A harmonized relational model for the temporal complexity of ICU care — vitals, labs, medications, respiratory support, microbiology, procedures, and more. Standardized variables and terminologies (including mCIDE) support reproducible multicenter work.
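To make the "write once, run anywhere" idea concrete, here is a minimal sketch of a cohort query against harmonized long-format tables. The table and column names (`hospitalization_id`, `vital_category`, `lab_category`, and so on) are CLIF-style stand-ins, not the authoritative schema (the CLIF 2.1.0 data dictionary defines that), and the data are synthetic.

```python
import pandas as pd

# Synthetic stand-ins for harmonized vitals and labs tables.
vitals = pd.DataFrame({
    "hospitalization_id": [1, 1, 2, 3],
    "vital_category": ["heart_rate", "spo2", "spo2", "spo2"],
    "vital_value": [92, 88, 97, 85],
})
labs = pd.DataFrame({
    "hospitalization_id": [1, 2, 3],
    "lab_category": ["lactate", "lactate", "lactate"],
    "lab_value": [3.1, 1.0, 4.2],
})

# Example cohort rule: lowest SpO2 < 90 AND any lactate > 2.0.
low_spo2 = (
    vitals[vitals["vital_category"] == "spo2"]
    .groupby("hospitalization_id")["vital_value"].min()
    .lt(90)
)
high_lactate = (
    labs[labs["lab_category"] == "lactate"]
    .groupby("hospitalization_id")["lab_value"].max()
    .gt(2.0)
)
cohort = sorted(set(low_spo2[low_spo2].index) & set(high_lactate[high_lactate].index))
print(cohort)  # [1, 3]
```

Because every site exposes the same categories and column names, the same query runs unchanged at each institution.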
Federated analytics: only aggregate results cross institutional boundaries. Patient-level data stays at each site — collaboration without central raw-data repositories.
Site-specific ETL maps local EHR data into CLIF; specifications, pipelines, and related tooling are open source under Apache License 2.0 on GitHub, alongside the published data dictionary.
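A hypothetical fragment of what such an ETL step looks like: local EHR codes are mapped into standard categories. The local codes, the mapping, and the column names below are invented for illustration; a real site pipeline follows the published specifications and controlled vocabularies.

```python
import pandas as pd

# Hypothetical local-to-standard vocabulary mapping.
LOCAL_TO_CLIF = {
    "HR": "heart_rate",
    "PULSE_OX": "spo2",
}

# Synthetic extract from a local EHR vitals feed.
local_vitals = pd.DataFrame({
    "encounter": [101, 101],
    "code": ["HR", "PULSE_OX"],
    "value": [92, 95],
})

# Rename columns and translate codes into standard categories.
clif_vitals = local_vitals.rename(
    columns={"encounter": "hospitalization_id", "value": "vital_value"}
)
clif_vitals["vital_category"] = clif_vitals.pop("code").map(LOCAL_TO_CLIF)
print(sorted(clif_vitals["vital_category"]))  # ['heart_rate', 'spo2']
```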
Network at a Glance
FairCareAI is a Python package for auditing machine learning models for fairness in clinical settings. It is built on the Van Calster et al. (2025) methodology and aligned with the CHAI RAIC governance framework, so health system teams can bring evidence-based fairness analysis to governance and clinical stakeholders.
Package suggests, humans decide. FairCareAI produces metrics and visualizations for review; deployment and policy choices remain with your institution and committees.
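To illustrate the kind of subgroup evidence such a review surfaces, here is a self-contained sketch of per-group discrimination (AUROC) on synthetic predictions. This is plain Python written for this page, not the FairCareAI API; the package produces its own metrics and visualizations.

```python
def auroc(y_true, y_score):
    """Probability a random positive outranks a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Entirely made-up labels and model scores for two patient subgroups.
groups = {
    "A": ([0, 1, 1, 0], [0.2, 0.9, 0.7, 0.4]),
    "B": ([1, 0, 1, 0], [0.6, 0.3, 0.4, 0.5]),
}
by_group = {g: auroc(y, s) for g, (y, s) in groups.items()}
print(by_group)  # {'A': 1.0, 'B': 0.75}
```

A gap like the one above is exactly what gets flagged for governance review; deciding what to do about it remains a human judgment.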
Install
pip install faircareai

Optional exports (PDF, PowerPoint, PNG bundles):

pip install "faircareai[export]"

See the repository for Playwright/Chromium setup for PDF generation.
FairCareAI supports CHAI-grounded fairness review; all outputs are advisory. Validate results in your local context before any clinical or operational use. Software is provided as-is; see the project license and documentation on GitHub.
Whether you are a clinician, engineer, or researcher, our infrastructure is designed to be open, reproducible, and collaborative. Reach out to learn how RICCC tools can power your next study.
Contact the Lab