
Aether
Healthcare makes sense when data has memory!
2 followers
Aether is an AI-powered longitudinal health platform that combines a patient-owned health record with a lightweight EHR for doctors. It ingests fragmented medical data from labs, hospitals, clinics, and documents, and organizes it into a longitudinal health graph for each individual. By learning from this data over time, Aether surfaces trends, changes, and early risk signals that episodic care misses. Built for India’s fragmented healthcare system, Aether is already used by 25,000+ patients and 3 hospitals.
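To make "longitudinal health graph" concrete, here is a minimal sketch of a patient-owned timeline that keeps events from many sources in time order. All names (`HealthEvent`, `PatientTimeline`, the sample values) are hypothetical illustrations, not Aether's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class HealthEvent:
    # One record: a lab result, visit note, or uploaded document.
    when: date
    source: str   # e.g. "lab", "hospital", "clinic"
    kind: str     # e.g. "HbA1c", "discharge_summary"
    value: str

@dataclass
class PatientTimeline:
    # Patient-owned longitudinal record: events stay in time order,
    # so context compounds instead of being lost between visits.
    patient_id: str
    events: list[HealthEvent] = field(default_factory=list)

    def add(self, event: HealthEvent) -> None:
        self.events.append(event)
        self.events.sort(key=lambda e: e.when)

    def history(self, kind: str) -> list[HealthEvent]:
        # All past values of one measure, oldest first: the raw
        # material for trend and early-risk detection.
        return [e for e in self.events if e.kind == kind]

tl = PatientTimeline("p-001")
tl.add(HealthEvent(date(2024, 6, 1), "lab", "HbA1c", "6.1%"))
tl.add(HealthEvent(date(2023, 1, 15), "lab", "HbA1c", "5.6%"))
print([e.value for e in tl.history("HbA1c")])  # ['5.6%', '6.1%']
```

The point of the structure is that a new record lands next to everything that came before it, which is what lets trends become visible at all.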

One idea that deeply shaped Aether comes from rare disease research.
A recent npj Digital Medicine paper shows that rare diseases are underdiagnosed not because data is missing, but because medical context does not compound over time. Diagnoses are provisional. Labels are noisy. Learning has to be longitudinal.
This resonated personally. My mother was diagnosed with an extremely rare cancer only days before her death. Her records existed, but no one ever had a complete picture.
I wrote a longer reflection on what rare disease AI teaches us about longitudinal health here:
https://myaether.live/blog/rare-disease-ai-longitudinal-health
Appreciate the PH community engaging with this perspective.
One of the ideas shaping Aether comes from longitudinal symptom research in oncology.
A recent JCO Clinical Cancer Informatics study shows that future cancer symptom severity can be predicted using sparse, irregular EHR nursing documentation, as long as symptom history is preserved over time.
The key insight is not the model. It is that learning only works when health systems stop throwing away longitudinal context.
I wrote a short reflection on what this teaches us about health graphs and EHR Lite design here:
https://myaether.live/blog/predicting-cancer-symptom-trajectories-longitudinal-ehr
Appreciate the PH community engaging with this perspective.
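To show what "preserving symptom history" buys you with sparse, irregular documentation, here is a toy sketch: carry forward the last observed score, the gap since it was recorded, and a local slope. The data and features are illustrative assumptions, not the JCO Clinical Cancer Informatics study's model.

```python
from datetime import date

# Sparse, irregular nursing-note pain scores (0-10) for one patient.
# Gaps between observations are uneven, as in real EHR documentation.
pain_scores = [
    (date(2024, 1, 3), 2),
    (date(2024, 1, 20), 4),
    (date(2024, 2, 18), 7),
]

def history_features(obs, as_of):
    """Turn an irregular symptom history into simple features:
    last observed value, days since that observation, and the slope
    between the last two points. Preserving history is the point;
    the features themselves are just one illustrative choice."""
    past = [(d, v) for d, v in obs if d <= as_of]
    last_date, last_val = past[-1]
    days_since = (as_of - last_date).days
    if len(past) >= 2:
        (d0, v0), (d1, v1) = past[-2], past[-1]
        slope = (v1 - v0) / max((d1 - d0).days, 1)
    else:
        slope = 0.0
    return {"last": last_val, "days_since": days_since, "slope": slope}

feats = history_features(pain_scores, as_of=date(2024, 2, 25))
print(feats)  # last value 7, seen 7 days ago, rising ~0.1 points/day
```

If the system discards old notes, `past` collapses to a single point and the trend signal, the part that actually predicts, disappears.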
We’ve been thinking a lot about representation in healthcare AI.
A recent npj Digital Medicine paper built a multimodal sepsis embedding model that outperformed baseline models and even physicians in mortality prediction.
The deeper insight is not about sepsis. It’s about representation learning.
If admission-level embeddings can unlock this much signal, what happens when AI has longitudinal context across years?
We wrote a breakdown here:
https://myaether.live/blog/sepsis-ai-representation-longitudinal-future
Would love thoughts from the PH community.
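For readers new to representation learning, here is a deliberately tiny sketch of the idea behind an admission-level embedding: squash several modalities into one fixed-length vector. This is an illustration of the concept only; the paper's sepsis model is a learned neural network, not this normalize-and-concatenate toy.

```python
import math

def embed_admission(vitals, labs, notes_len):
    """Toy admission-level embedding: L2-normalize each modality
    and concatenate into one fixed-length vector. Illustrative
    only; a real multimodal model learns these representations."""
    def l2_normalize(xs):
        norm = math.sqrt(sum(x * x for x in xs)) or 1.0
        return [x / norm for x in xs]
    # Each modality becomes a small normalized block of the vector.
    return l2_normalize(vitals) + l2_normalize(labs) + l2_normalize([notes_len])

e = embed_admission(vitals=[98.6, 120, 80], labs=[1.2, 14.0], notes_len=540)
print(len(e))  # 6
```

The longitudinal question we keep asking: once each admission is a vector, what can a model learn from a patient's sequence of such vectors across years?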