From weather data to decisions
"72°F and partly cloudy" is data. It is not an answer to the question you actually asked.
The question you actually asked was one of these:
Should I spray the orchard today?
Is tomorrow good for a long run?
Will the wedding ceremony be comfortable outdoors at 4 PM?
Is it safe to take the boat out Saturday morning?
When is the golden-hour light going to be worth driving an hour for?
Should I cover the tomatoes tonight?
Is this a migraine-risk day for my kid?
None of those are answered by a temperature and an icon. They're answered by a small rule-based model that takes the weather context and applies a domain rule set. That's what the DewLogic Decision Engine is.
The modules
We ship several first-party decision modules, each with its own scoring logic:
Activity scoring. Running, cycling, hiking, tennis, golf, pickleball, kayaking, skiing, dog walks, and more. Each activity has a different comfort envelope and different blocking conditions.
Agriculture. Spray windows (wind, humidity, rain proximity), growing-degree-day accumulation, frost risk windows, evapotranspiration, leaf wetness duration.
Hazard analysis. Heat index, wind chill, lightning proximity, flash flood risk, wildfire smoke, ice events.
Health advisory. UV exposure, pollen count, air quality, pressure swings (migraine risk), humidity effects, user-configurable sensitivities.
Marine analysis. Small craft advisory thresholds, sea state, wind direction vs. fetch, fog risk.
Photography planning. Golden hour timing, cloud cover quality at sunrise and sunset, atmospheric clarity, storm chase windows.
Recommendation generation. Combines the above into a ranked list of "good for" and "avoid" suggestions tailored to the day.
What they all share
Every module consumes the same inputs: the current Virtual Station, the forecast (hourly + daily), and the user profile (favorites, hidden activities, unit preferences, sensitivities). Every module produces the same output shape: a score, a one-line verdict, and a structured list of reasons.
This consistent shape matters because the UI can render any module the same way (a card with a score, a verdict, and expandable reasons) and because the LLM can consume the output directly as context.
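As a sketch of what that shared shape might look like (the field and method names here are illustrative, not the actual DewLogic API):

```python
from dataclasses import dataclass, field

@dataclass
class ModuleResult:
    """Uniform output shape every decision module returns."""
    module: str    # e.g. "marine", "agriculture" (names assumed for illustration)
    score: float   # 0-100, higher is better
    verdict: str   # one-line human-readable summary
    reasons: list[str] = field(default_factory=list)  # structured explanations

    def as_llm_context(self) -> str:
        """Flatten to a compact text block an LLM can consume directly."""
        lines = [f"{self.module} score {self.score:.0f}/100. Verdict: {self.verdict}"]
        lines += [f"- {r}" for r in self.reasons]
        return "\n".join(lines)
```

Because every module emits this one shape, both a UI card renderer and the LLM context builder can be written once and reused across all seven modules.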
Example: the spray window
A good spray window for a farmer is not "no rain in the forecast." It's more like:
Wind between 3 and 10 mph (too calm = poor coverage, too windy = drift)
Relative humidity above 50% (low humidity = droplets evaporate before hitting the leaf)
No precipitation in the next 4 hours (avoid washoff)
Temperature below 85°F (heat volatilizes certain chemistries)
Temperature inversion not active (inversions trap and displace the spray unpredictably)
The agriculture module walks the forecast hour by hour, evaluates all five conditions simultaneously, and surfaces the contiguous windows where they all hold. Then it ranks them.
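The hour-by-hour scan could be sketched like this. The field names and thresholds mirror the list above but are illustrative, not the shipped implementation:

```python
def spray_windows(hourly):
    """Return contiguous (start, end) hour index ranges where all five
    spray conditions hold. `hourly` is a list of per-hour dicts; the
    keys below are assumed names, not the real forecast schema."""
    def ok(h, next4):
        return (3 <= h["wind_mph"] <= 10                      # drift vs. coverage
                and h["rh_pct"] > 50                          # droplet evaporation
                and all(n["precip_in"] == 0 for n in next4)   # washoff in next 4 h
                and h["temp_f"] < 85                          # volatilization
                and not h["inversion"])                       # inversion displacement

    windows, start = [], None
    for i, h in enumerate(hourly):
        good = ok(h, hourly[i + 1:i + 5])
        if good and start is None:
            start = i                       # window opens
        elif not good and start is not None:
            windows.append((start, i))      # window closes
            start = None
    if start is not None:
        windows.append((start, len(hourly)))
    return windows
```

Ranking the resulting windows (by length, by how far each hour sits from the thresholds, etc.) is then a separate, equally deterministic step.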
A weather app that just tells you "it's windy tomorrow" is leaving the work to you. This one does the work.
Example: photography golden hour
Any app can tell you when sunset is. A useful photography module tells you:
Sunset time
Cloud cover percentage at sunset, mid-level vs. low-level
Whether those clouds are in a position to catch color (clear western horizon with high clouds is the dream; overcast is dead)
Atmospheric clarity (recent precipitation and low humidity help, haze hurts)
A composite score that answers "is this going to be a scroll-worthy sunset?"
The inputs are all in the Virtual Station and forecast. The rules encode what photographers know about light. The output is a score plus reasons.
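A composite score along those lines might combine the factors multiplicatively, so that any dealbreaker (low overcast, blocked western horizon) collapses the whole score. The weights and names here are illustrative, not the shipped tuning:

```python
def sunset_score(cloud_mid_pct, cloud_low_pct, west_horizon_clear, clarity):
    """Composite sunset-quality score, 0-100, plus reasons.
    clarity is 0.0-1.0 (haze hurts, post-rain air helps).
    All parameter names and weights are assumptions for illustration."""
    # Mid/high clouds catch color; low overcast blocks it from below.
    cloud_term = cloud_mid_pct * (1 - cloud_low_pct / 100) / 100
    # A clear western horizon lets the setting sun underlight the clouds.
    horizon_term = 1.0 if west_horizon_clear else 0.2
    score = 100 * cloud_term * horizon_term * clarity

    reasons = []
    if cloud_low_pct > 60:
        reasons.append("low overcast likely blocks color")
    if not west_horizon_clear:
        reasons.append("western horizon obstructed: sun can't underlight clouds")
    return round(score), reasons
```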
Why this is the LLM's best friend
If you ask DewLogic a natural-language question, the local LLM doesn't receive a wall of raw JSON. It receives the relevant Decision Engine output (the score, the verdict, the reasons) plus the necessary Virtual Station context. The model's job is to translate and contextualize, not to reinvent agricultural chemistry from first principles.
This keeps local small models sharp and correct on domains that would otherwise make them hallucinate. The rules live in code. The LLM handles tone, framing, and follow-up questions.
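The handoff described above can be sketched as a plain prompt assembler. Nothing here is the real DewLogic prompt; it just shows the structure: the engine's answer goes in, the model translates:

```python
def build_prompt(question, engine_output, station_summary):
    """Assemble the local LLM's context from structured engine output
    rather than raw JSON. All strings here are illustrative."""
    return (
        "You are a weather assistant. Translate the structured analysis "
        "below into a conversational answer. Do not invent thresholds or "
        "contradict the engine's verdict.\n\n"
        f"Station context:\n{station_summary}\n\n"
        f"Decision engine output:\n{engine_output}\n\n"
        f"User question: {question}"
    )
```

The key property is that every number the model sees was computed by the rule set, so there is nothing domain-specific left for it to guess.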
Why we built this ourselves
Most weather products that claim "AI-powered insights" are passing raw data to an LLM and hoping it makes sense of it. That works until the model confidently tells a farmer to spray in 15 mph winds because it didn't know the threshold. The only way to avoid that is to encode the domain knowledge separately and give the model a structured answer to translate.
That's not new. That's how every good expert system has always worked. It just hasn't been standard in consumer weather apps.
Each module took serious domain research (we talked to actual farmers, actual sailors, actual photographers). Happy to share what we learned about any of them.