AI’s expansion into the physical world is reshaping what investors choose to back
Updated
February 12, 2026 1:21 PM

Exterior view of Exchange Square in Central, Hong Kong. PHOTO: UNSPLASH
Artificial intelligence is often discussed in terms of large models trained in distant data centres. Less visible, but increasingly consequential, is the layer of computing that enables machines to interpret and respond to the physical world in real time. As AI systems move from abstract software into vehicles, cameras and factory equipment, the chips that power on-device decision-making are becoming strategic assets in their own right.
It is within this shift that Axera, a Shanghai-based semiconductor company, began trading on the Hong Kong Stock Exchange on February 10 under the ticker symbol 00600.HK. The company priced its shares at HK$28.2, debuting with a market capitalization of approximately HK$16.6 billion. Its listing marks the first time a Chinese company focused primarily on AI perception and edge inference chips has gone public in the city — a milestone that underscores growing investor interest in the hardware layer of artificial intelligence.
The listing comes at a time when demand for flexible, on-device intelligence is expanding. As manufacturers, automakers and infrastructure operators integrate AI into physical systems, the need for specialized processors capable of handling visual and sensor data efficiently has grown. At the same time, China’s domestic semiconductor industry has faced increasing pressure to build local capabilities across the chip value chain. Companies such as Axera sit at the intersection of these dynamics, serving both commercial markets and broader industrial policy priorities.
For Hong Kong, the debut adds to a cohort of technology companies seeking public capital to scale hardware-intensive businesses. Unlike software firms, semiconductor designers operate in a capital-intensive environment shaped by supply chains, fabrication partnerships and rapid product cycles. Their presence on the exchange reflects a maturing investor appetite for AI infrastructure, not just consumer-facing applications.
Axera’s early backer, Qiming Venture Partners, led the company’s pre-A financing round in 2020 and continued to participate in subsequent rounds. Prior to the IPO, it held more than 6 percent of the company, making it the second-largest institutional investor. The public offering provides liquidity for early investors and new funding for a company operating in a highly competitive and technologically demanding sector.
Axera’s market debut does not resolve the competitive challenges of the semiconductor industry, where innovation cycles are short and global competition is intense. But it does signal that investors are placing tangible value on the hardware that enables AI’s expansion beyond the cloud. In that sense, the listing represents more than a corporate milestone; it reflects a broader transition in how artificial intelligence is built, deployed and financed — moving steadily from software abstraction toward the silicon that makes real-world autonomy possible.
The upgraded CodeFusion Studio 2.0 simplifies how developers design, test and deploy AI on embedded systems.
Updated
January 8, 2026 6:34 PM

Illustration of CodeFusion Studio™ 2.0 showing AI, code and chip icons. PHOTO: ANALOG DEVICES, INC.
Analog Devices (ADI), a global semiconductor company, launched CodeFusion Studio™ 2.0 on November 3, 2025. The new version of its open-source development platform is designed to make it easier and faster for developers to build AI-powered embedded systems that run on ADI’s processors and microcontrollers.
“The next era of embedded intelligence requires removing friction from AI development”, said Rob Oshana, Senior Vice President of the Software and Digital Platforms group at ADI. “CodeFusion Studio 2.0 transforms the developer experience by unifying fragmented AI workflows into a seamless process, empowering developers to leverage the full potential of ADI's cutting-edge products with ease so they can focus on innovating and accelerating time to market”.
The upgraded platform introduces new tools for hardware abstraction, AI integration and automation. These help developers move more easily from early design to deployment.
CodeFusion Studio 2.0 enables complete AI workflows, allowing teams to use their own models and deploy them on everything from low-power edge devices to advanced digital signal processors (DSPs).
Built on Microsoft Visual Studio Code, the new CodeFusion Studio offers built-in checks for model compatibility, along with performance testing and optimization tools that help shorten development time. A new modular framework based on Zephyr OS also lets developers test and monitor how AI and machine learning models perform in real time, giving clearer insight into how each part of a model behaves during operation and helping fine-tune performance across different hardware setups.
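For readers curious what this kind of real-time performance monitoring can look like on a Zephyr-based device, the sketch below shows a minimal Zephyr application that times a placeholder inference call and reports its latency over the console. It is illustrative only: run_inference() is a hypothetical stand-in for a deployed model, not a CodeFusion Studio or ADI API, and only standard Zephyr kernel calls are used.

    /*
     * Minimal sketch: measuring on-device inference latency on Zephyr RTOS.
     * run_inference() is a hypothetical placeholder for a deployed ML model.
     */
    #include <zephyr/kernel.h>
    #include <zephyr/sys/printk.h>

    /* Placeholder for a real model invocation (e.g. processing sensor data). */
    static int run_inference(void)
    {
        k_busy_wait(2000);  /* simulate roughly 2 ms of compute */
        return 0;
    }

    int main(void)
    {
        while (1) {
            uint32_t start = k_cycle_get_32();
            run_inference();
            uint32_t cycles = k_cycle_get_32() - start;

            /* Convert hardware cycles to microseconds for a readable figure. */
            printk("inference latency: %llu us\n",
                   (unsigned long long)k_cyc_to_us_floor64(cycles));

            k_msleep(100);  /* pace the monitoring loop */
        }
        return 0;
    }

In practice, a framework such as the one described above would surface this kind of per-stage timing automatically rather than requiring hand-written instrumentation; the sketch simply makes the underlying idea concrete.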
The CodeFusion Studio System Planner has also been redesigned to handle more device types and complex, multi-core applications. With new built-in diagnostic and debugging features such as integrated memory analysis and visual error tracking, developers can now troubleshoot problems faster and keep their systems running more efficiently.
This launch marks a deeper pivot for ADI. Long known for high-precision analog chips and converters, the company is expanding its edge-AI and software capabilities to enable what it calls Physical Intelligence — systems that can perceive, reason, and act locally.
“Companies that deliver physically aware AI solutions are poised to transform industries and create new, industry-leading opportunities. That's why we're creating an ecosystem that enables developers to optimize, deploy and evaluate AI models seamlessly on ADI hardware, even without physical access to a board”, said Paul Golding, Vice President of Edge AI and Robotics at ADI. “CodeFusion Studio 2.0 is just one step we're taking to deliver Physical Intelligence to our customers, ultimately enabling them to create systems that perceive, reason and act locally, all within the constraints of real-world physics”.