The IT services firm strengthens its collaboration with Google Cloud to help enterprises move AI from pilot projects to production systems
Updated February 18, 2026 8:11 PM

Google Cloud building. PHOTO: ADOBE STOCK
Enterprise interest in AI has moved quickly from experimentation to execution. Many organizations have tested generative tools, but turning those tools into systems that can run inside daily operations remains a separate challenge. Cognizant, an IT services firm, is expanding its partnership with Google Cloud to help enterprises move from AI pilots to fully deployed, production-ready systems.
Cognizant and Google Cloud are deepening their collaboration around Google’s Gemini Enterprise and Google Workspace. Cognizant is deploying these tools across its own workforce first, using them to support internal productivity and collaboration. The idea is simple: test and refine the systems internally, then package similar capabilities for clients.
The focus of the partnership is what Cognizant calls “agentic AI.” In practical terms, this refers to AI systems that can plan, act and complete tasks with limited human input. Instead of generating isolated outputs, these systems are designed to fit into business workflows and carry out structured tasks.
To make that workable at scale, Cognizant is building delivery infrastructure around the technology. The company is setting up a dedicated Gemini Enterprise Center of Excellence and formalizing an Agent Development Lifecycle. This framework covers the full process, from early design and blueprinting to validation and production rollout. The aim is to give enterprises a clearer path from AI concept to deployed system.
Cognizant also plans to introduce a bundled productivity offering that combines Gemini Enterprise with Google Workspace. The targeted use cases are operational rather than experimental. These include collaborative content creation, supplier communications and other workflow-heavy processes that can be standardized and automated.
Beyond productivity tools, Cognizant is integrating Gemini into its broader service platforms. Through Cognizant Ignition, enabled by Gemini, the company supports early-stage discovery and prototyping while helping clients strengthen their data foundations. Its Agent Foundry platform provides pre-configured and no-code capabilities for specific use cases such as AI-powered contact centers and intelligent order management. These tools are designed to reduce the amount of custom development required for each deployment.
Scaling is another element of the strategy. Cognizant, a multi-year Google Cloud Data Partner of the Year award winner, says it will rely on a global network of Gemini-trained specialists to deliver these systems. The company is also expanding work tied to Google Distributed Cloud and showcasing capabilities through its Google Experience Zones and Gen AI Studios.
For Google Cloud, the partnership reinforces its enterprise AI ecosystem. Cloud providers can offer models and infrastructure, but enterprise adoption often depends on service partners that can integrate tools into existing systems and manage ongoing operations. By aligning closely with Cognizant, Google strengthens its ability to move Gemini from platform capability to production deployment.
The announcement does not introduce a new AI model. Instead, it reflects a shift in emphasis. The core question is no longer whether AI tools exist, but how they are implemented, governed and scaled across large organizations. Cognizant’s expanded role suggests that execution frameworks, internal deployment and structured delivery models are becoming central to how enterprises approach AI.
In that sense, the partnership is less about new technology and more about operational maturity. It highlights how AI is moving from isolated pilots to managed systems embedded in business processes — a transition that will likely define the next phase of enterprise adoption.
The quiet infrastructure shift powering the next generation of data centers
Updated February 12, 2026 1:21 PM

Peripheral Component Interconnect Express (PCIe) port on a motherboard, coloured yellow. PHOTO: UNSPLASH
Modern data centers operate on a simple but fundamental principle: computers must be able to share data extremely quickly. As AI and cloud systems grow, servers are no longer confined to a single rack. They are spread across many racks, sometimes across entire rooms. When that happens, moving data quickly and cleanly becomes harder.
Montage Technology, a Shanghai-based semiconductor company, builds the chips and connection systems that help servers exchange data without delays. This week, the company announced a new Active Electrical Cable (AEC) solution based on PCIe 6.x and CXL 3.x — two important standards used to connect CPUs, GPUs, network cards and storage inside modern data centers.
In simple terms, Montage’s new AEC product helps different parts of a data center “talk” to each other faster and more reliably, even when those parts are physically far apart.
As data centers grow to support AI and cloud workloads, their architecture is changing. Instead of everything sitting inside one rack, systems now stretch across multiple racks and even multiple rows. This creates a new problem: the longer the distance between machines, the harder it is to keep data signals clean and fast.
This is where Active Electrical Cables come in. Unlike passive copper cables, AECs include small electronic components inside the cable itself. These components strengthen and clean up the data signal as it travels, so information can move farther without getting distorted or delayed.
Montage’s solution uses its own retimer chip based on PCIe 6.x and CXL 3.x. A “retimer” refreshes the data signal so it arrives accurately at the other end. This allows servers, GPUs, storage devices and network cards to stay tightly connected even across longer distances inside large data centers.
The company also uses high-density cable designs and built-in monitoring tools so operators can track performance and fix issues faster. That makes large data centers easier to deploy and maintain.
According to Montage, the solution has already passed interoperability tests with CPUs, xPUs, PCIe switches and network cards. It has also been jointly developed with cable manufacturers in China and validated at the system level.
What makes this development important is not just speed. It is about scale. AI models, cloud services and real-time applications demand massive amounts of data to move continuously between machines. If that movement slows down, everything else slows with it.
By improving how machines connect across racks, Montage’s AEC solution supports the kind of infrastructure that next-generation AI and cloud systems depend on.
Looking ahead, the company plans to expand its high-speed interconnect products further, including work on PCIe 7.0 and Ethernet retimer technologies.
Quietly, in the background of every AI system and cloud service, there is a network of cables and chips doing the hard work of moving data. Montage’s latest launch focuses on making that hidden layer faster, cleaner and ready for the scale that modern computing now demands.