What Overstory’s vegetation intelligence reveals about wildfire and outage risk.
Updated January 15, 2026, 8:03 PM

Managing vegetation around power lines has long been one of the biggest operational challenges for utilities. A single tree growing too close to electrical infrastructure can trigger outages or, in the worst cases, spark fires. With vast service territories, shifting weather patterns and limited visibility into changing landscape conditions, utilities often rely on inspections and broad wildfire-risk maps that provide only partial insight into where the most serious threats actually are.
Overstory, a company specializing in AI-powered vegetation intelligence, addresses this visibility gap with a platform that uses high-resolution satellite imagery and machine-learning models to interpret vegetation conditions in detail. Instead of assessing risk by region, terrain type or outdated maps, the system evaluates conditions tree by tree. This helps utilities identify precisely where hazards exist and which areas demand immediate intervention, which is critical in regions where small variations in vegetation density, fuel type or moisture levels can influence how quickly a spark might spread.
At the core of this technology is Overstory’s proprietary Fuel Detection Model, designed to identify vegetation most likely to ignite or accelerate wildfire spread. Unlike broad, publicly available fire-risk maps, the model analyzes the specific fuel conditions surrounding electrical infrastructure. By pinpointing exact locations where certain fuel types or densities create elevated risk, utilities can plan targeted wildfire-mitigation work rather than relying on sweeping, resource-heavy maintenance cycles.
This data-driven approach is reshaping how utilities structure vegetation-management programs. Having visibility into where risks are concentrated—and which trees or areas pose the highest threat—allows teams to prioritize work based on measurable evidence. For many utilities, this shift supports more efficient crew deployment, reduces unnecessary trims and builds clearer justification for preventive action. It also offers a path to strengthening grid reliability without expanding operational budgets.
Overstory’s recent US$43 million Series B funding round, led by Blume Equity with support from Energy Impact Partners and existing investors, reflects growing interest in AI tools that translate environmental data into actionable wildfire-prevention intelligence. The investment will support further development of Overstory’s risk models and help expand access to its vegetation-intelligence platform.
Yet the company’s focus remains consistent: giving utilities sharper, real-time visibility into the landscapes they manage. By converting satellite observations into clear and actionable insights, Overstory’s AI system provides a more informed foundation for decisions that impact grid safety and community resilience. In an environment where a single missed hazard can have far-reaching consequences, early and precise detection has become an essential tool for preventing wildfires before they start.
The focus is no longer just AI-generated worlds, but how those worlds become structured digital products
Updated February 20, 2026, 6:50 PM

As AI tools improve, creating 3D content is becoming faster and easier. However, building that content into interactive experiences still requires time, structure and technical work. That difference between generation and execution is where HTC VIVERSE and World Labs are focusing their new collaboration.
HTC VIVERSE is a 3D content platform developed by HTC. It provides creators with tools to build, refine and publish interactive virtual environments. World Labs, meanwhile, is an AI startup founded by researcher Fei-Fei Li and a team of machine-learning specialists. The company recently introduced Marble, a tool that generates full 3D environments from simple text, image or video prompts.
While Marble can quickly create a digital world, that world on its own is not yet a finished experience. It still needs structure, navigation and interaction. This is where VIVERSE fits in. By combining Marble’s world generation with VIVERSE’s building tools, creators can move from an AI-generated scene to a usable, interactive product.
In practice, the workflow has two steps. First, Marble produces the base 3D environment. Then, creators bring that environment into VIVERSE, where they add game mechanics, scenes and interactive elements. In this model, AI handles the early visual creation, while the human creator defines how users explore and interact with the world.
To demonstrate this process, the companies developed three example projects. Whiskerhill turns a Marble-generated world into a simple quest-based experience. Whiskerport connects multiple AI-generated scenes into a multi-level environment that users navigate through portals. Clockwork Conspiracy, built by VIVERSE, uses Marble’s generation system to create a more structured, multi-scene game. These projects are not just demos. They serve as proof that AI-generated worlds can evolve beyond static visuals and become interactive environments.
This matters because generative AI is often judged by how quickly it produces content. However, speed alone does not create usable products. Digital experiences still require sequencing, design decisions and user interaction. As a result, the real challenge is not generation, but integration — connecting AI output to tools that make it functional.
Seen in this context, the collaboration is less about a single product and more about workflow. VIVERSE provides a system that allows AI-generated environments to be edited and structured. World Labs provides the engine that creates those environments in the first place. Together, they are testing whether AI can fit directly into a full production pipeline rather than remain a standalone tool.
Ultimately, the collaboration reflects a broader change in creative technology. AI is no longer only producing isolated assets. It is beginning to plug into the larger process of building complete experiences. The key question is no longer how quickly a world can be generated, but how easily that world can be turned into something people can actually use and explore.