AI

How Analog Devices Is Turning Hardware Into Intelligence

The upgraded CodeFusion Studio 2.0 simplifies how developers design, test and deploy AI on embedded systems.

Updated

November 27, 2025 3:26 PM

Illustration of CodeFusion Studio™ 2.0 showing AI, code and chip icons. PHOTO: ANALOG DEVICES, INC.

Analog Devices (ADI), a global semiconductor company, launched CodeFusion Studio™ 2.0 on November 3, 2025. The new version of its open-source development platform is designed to make it easier and faster for developers to build AI-powered embedded systems that run on ADI’s processors and microcontrollers.

“The next era of embedded intelligence requires removing friction from AI development”, said Rob Oshana, Senior Vice President of the Software and Digital Platforms group at ADI. “CodeFusion Studio 2.0 transforms the developer experience by unifying fragmented AI workflows into a seamless process, empowering developers to leverage the full potential of ADI's cutting-edge products with ease so they can focus on innovating and accelerating time to market”.

The upgraded platform introduces new tools for hardware abstraction, AI integration and automation. These help developers move more easily from early design to deployment.

CodeFusion Studio 2.0 enables complete AI workflows, allowing teams to use their own models and deploy them on everything from low-power edge devices to advanced digital signal processors (DSPs).

Built on Microsoft Visual Studio Code, the new CodeFusion Studio offers built-in checks for model compatibility, along with performance testing and optimization tools that help reduce development time. Building on these capabilities, a new modular framework based on Zephyr OS lets developers test and monitor how AI and machine learning models perform in real time. This gives clearer insight into how each part of a model behaves during operation and helps fine-tune performance across different hardware setups.
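ADI has not published the underlying profiling API, so the snippet below is only a rough host-side sketch of the kind of pre-deployment check the platform automates: measuring a candidate model's inference latency before it ever touches a board. It uses the open-source TensorFlow Lite interpreter rather than any CodeFusion Studio tooling, and the model path is a placeholder.

```python
# Hypothetical pre-deployment latency check; NOT CodeFusion Studio's API.
# Assumes a TFLite model exported from the team's own training pipeline.
import time
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model_int8.tflite")  # placeholder path
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]

# A zero-filled input matching the model's expected shape and dtype is enough
# for a timing run; a real evaluation would use representative sensor data.
sample = np.zeros(inp["shape"], dtype=inp["dtype"])

latencies_ms = []
for _ in range(100):
    interpreter.set_tensor(inp["index"], sample)
    start = time.perf_counter()
    interpreter.invoke()
    latencies_ms.append((time.perf_counter() - start) * 1e3)

print(f"median: {np.median(latencies_ms):.2f} ms  "
      f"p95: {np.percentile(latencies_ms, 95):.2f} ms")
```

On an actual target, the Zephyr-based framework gathers comparable timing data on-device and in real time; the host-side version simply shows why catching an oversized or incompatible model before deployment saves iteration time.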

The CodeFusion Studio System Planner has also been redesigned to handle more device types and complex, multi-core applications. With new built-in diagnostic and debugging features, such as integrated memory analysis and visual error tracking, developers can now troubleshoot problems faster and keep their systems running more efficiently.

This launch marks a deeper pivot for ADI. Long known for high-precision analog chips and converters, the company is expanding its edge-AI and software capabilities to enable what it calls Physical Intelligence — systems that can perceive, reason, and act locally.  

“Companies that deliver physically aware AI solutions are poised to transform industries and create new, industry-leading opportunities. That's why we're creating an ecosystem that enables developers to optimize, deploy and evaluate AI models seamlessly on ADI hardware, even without physical access to a board”, said Paul Golding, Vice President of Edge AI and Robotics at ADI. “CodeFusion Studio 2.0 is just one step we're taking to deliver Physical Intelligence to our customers, ultimately enabling them to create systems that perceive, reason and act locally, all within the constraints of real-world physics”.

Keep Reading

AI

Why MicroCloud Hologram Is Bringing Quantum Computing Into the Future of 3D Modeling

Rethinking 3D modeling for a world that generates too much, too quickly.

Updated

December 5, 2025 3:46 PM

A hologram from the Star Wars franchise at Walt Disney World Resort in Orlando. PHOTO: UNSPLASH

MicroCloud Hologram Inc. (NASDAQ: HOLO), a technology service provider recognized for its holography and imaging systems, is now expanding into a more advanced realm: a quantum-driven 3D intelligent model. The goal is to generate detailed 3D models and images with far less manual effort — a need that has only grown as industries flood the world with more visual data every year.

The concept is straightforward, even if the technology behind it isn’t. Traditional 3D modeling workflows are slow, fragmented and depend on large teams to clean datasets, train models, adjust parameters and fine-tune every output. HOLO is trying to close that gap by combining quantum computing with AI-powered 3D modeling, enabling the system to process massive datasets quickly and automatically produce high-precision 3D assets with much less human involvement.

To achieve this, the company developed a distributed architecture comprising several specialized subsystems. One subsystem collects and cleans raw visual data from different sources. Another uses quantum deep learning to understand patterns in that data. A third converts the trained model into ready-to-use 3D assets based on user inputs. Additional modules manage visualization, secure data storage and system-wide protection, all supported by quantum-level encryption. Each subsystem runs in its own container and communicates through encrypted interfaces, allowing flexible upgrades and scaling without disrupting the entire system.
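HOLO has not disclosed implementation details, so the sketch below only illustrates the hand-off pattern the company describes: independent stages that exchange nothing but ciphertext. The stage names and message format are invented for illustration, and Fernet symmetric encryption from the Python cryptography library stands in for whatever scheme HOLO actually uses.

```python
# Illustrative pipeline only; the stage names, data format and the use of
# Fernet are assumptions, not details of HOLO's system.
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # shared key standing in for the encrypted interface
channel = Fernet(key)

def ingest_stage(raw_records):
    """Collect and clean raw visual data, then pass it on as ciphertext."""
    cleaned = [r.strip() for r in raw_records if r.strip()]
    return channel.encrypt(json.dumps(cleaned).encode())

def modeling_stage(ciphertext):
    """Decrypt upstream data, derive a (mock) 3D representation, re-encrypt it."""
    records = json.loads(channel.decrypt(ciphertext))
    asset = {"source_records": len(records), "mesh_vertices": len(records) * 1000}
    return channel.encrypt(json.dumps(asset).encode())

def export_stage(ciphertext):
    """Turn the trained representation into a user-facing asset description."""
    return json.loads(channel.decrypt(ciphertext))

payload = ingest_stage(["scan_001 ", "scan_002", ""])
print(export_stage(modeling_stage(payload)))
```

In the architecture the company describes, each of these stages would run in its own container and could be upgraded or scaled independently, since neighboring subsystems only ever see the agreed-on encrypted payload.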

Why this matters: Industries ranging from gaming and film to manufacturing, simulation and digital twins are rapidly increasing their reliance on 3D content. The real bottleneck isn’t creativity — it’s time. Producing accurate, high-quality 3D assets still requires a huge amount of manual processing. HOLO’s approach attempts to lighten that workload by utilizing quantum tools to speed up data processing, model training, generation and scaling, while keeping user data secure.

According to the company, the system’s biggest advantages include its ability to handle massive datasets more efficiently, generate precise 3D models with fewer manual steps, and scale easily thanks to its modular, quantum-optimized design. Whether quantum computing will become a mainstream part of 3D production remains an open question. Still, the model shows how companies are beginning to rethink traditional 3D workflows as demand for high-quality digital content continues to surge.