Fujitsu Launches Platform for Managing and Operating Generative AI Systems
KAWASAKI, Japan, Jan. 27, 2026 — Fujitsu has announced the launch of a dedicated AI platform…
For a mid-tier IT firm, Coforge’s performance is unusually strong, especially in a quarter that’s often relatively lukewarm. In Q3, the company outdid itself, all thanks to AI.
“Almost 100% of the wins and new contracts being awarded to us are AI-infused,” CEO and executive director Sudhir Singh said during the earnings call. Not AI pilots, not strategy decks, but live projects tied to delivery and business outcomes.
Coforge, he said, is "bringing the promise of AI to fruition" rather than just talking about it. It was a pointed remark in a season when everyone talked about AI-led revenue.
The numbers gave Singh room to be confident. Coforge grew 4.4% sequentially in constant currency terms, with revenue for the quarter ended December 2025 totalling ₹4,188 crore ($478 million). Order intake hit $593 million, helped by six large deals in what is usually a weak quarter.
The real anchor was the pipeline. The executable order book for the next twelve months rose to $1.72 billion, more than 30% higher than a year ago. Year-to-date dollar revenue growth now stands at 32.8%, an outlier in a market where most large firms are still struggling to cross 3–4%.
Margins slipped, but not in a way that changed the tone. Free cash flow conversion came in at 110%, far ahead of guidance. Net profit told a messier story: it fell 33% sequentially to ₹250 crore ($27.3 million), hit by a one-time ₹118 crore ($12.9 million) impact from labour code changes, though that hit was common to firms across the industry.
In the earnings call, Singh talked about how two years ago, boards were asking how the company would adopt AI. Today, he said, that question is gone. Clients now want proof of impact, measurable gains, and operational change. “The age of AI experimentation is over,” he said. What matters now is whether firms can modernise foundations and deliver AI as part of one integrated transformation.
That framing explains where Coforge sees its edge. The firm is not chasing client counts; it has about 600 clients and wants growth from depth, not breadth. Around 95% of revenue comes from repeat business. The top five clients grew 51% year-to-date; the top ten grew 47%.
On the hiring front, headcount rose by 445 to 35,341. Attrition eased to 10.9%, among the lowest in the industry.
When it comes to hiring AI-ready fresh graduates, Singh said that, unlike engineers from earlier generations, new freshers have never really relied on certifications to prove their value.
“They come from an ecosystem in colleges globally, which is very hackathon-centric. They are used to being put into situations where they have to find a solution. Given how powerful AI is as a technology, that is the number one skill that you look for, and given the fact that the new cohort of engineers fresh from college has that in abundance, we have definitely not changed the talent catchment that we go after,” Singh explained.
The delivery examples were tightly chosen. At a European global bank, Coforge deployed autonomous agents across data silos to rework cash flow forecasting. At a global airline, it rebuilt the software life cycle using AI-led engineering. At a US financial services firm, its ForgeX platform, an integrated engineering and delivery service launched in November last year, has more than 20 domain-specific agents to govern automation.
The acquisition of Encora sits at the centre of that strategy. The quarter's numbers do not include it yet. Once closed, Singh said, the combined firm will have a $2.5 billion revenue base across data, cloud, and AI engineering. Coforge has decided not to retire Encora's $500 million debt through a QIP (qualified institutional placement), opting instead for bank financing at mid-single-digit rates.
Across Indian IT, AI strategies are now splitting into camps.
TCS reported scale, with $1.8 billion in annualised AI revenue, though it still forms a small share of total business. HCLTech reported precision, with $146 million in advanced AI revenue, narrowly defined. Meanwhile, Infosys refused to publish a number, choosing instead to talk about 4,600 projects, 500 agents in production, and 28 million lines of AI-generated code.
Coforge has chosen a fourth path. It did not publish an AI revenue line at all. Instead, Singh said almost every new deal is already AI-infused. AI is not a business unit for Coforge; it is the delivery model.
The post Coforge Now Infuses AI in 100% Projects, Refuses to Reveal AI Revenue appeared first on Analytics India Magazine.
Adobe has unveiled Firefly Foundry, a platform for building commercially safe AI models tuned to a company or IP owner's proprietary brand or franchise content. These omni-models can generate high-fidelity image, video, audio, 3D, and vector outputs with a complete understanding of a brand or franchise's creative universe.
According to the company, Firefly Foundry helps the media and entertainment industry move faster while preserving artistry, authorship and ownership. It aims to empower studios and creatives to enhance storytelling by rapidly generating engaging short-form social content, enabling broader audience appeal through added characters and story arcs.
“Integrating Firefly Foundry into our workflow builds on that legacy by giving our artists the freedom to push ideas further, while giving co-production, client, and distribution partners confidence in how generative AI is being used,” said Jamie Byrne, co-founder, president and COO of Promise Advanced Imagination.
The platform supports brands in creating immersive experiences beyond the screen, using digital displays and mobile apps to bring narratives to life in venues like theme parks. Directors and storyboard artists can benefit from advanced tools that facilitate idea development and accurately capture their vision during pre-production.
On set, filmmakers can fine-tune their creative choices in real-time, ensuring effective shot lists while efficiently processing dailies. In post-production, Firefly Foundry optimises workflows for editors and visual effects artists, enabling them to enhance scenes and finalise frames without costly reshoots.
Adobe is forging partnerships across the industry. Collaborators include talent agencies Creative Artists Agency, United Talent Agency, and William Morris Endeavor, as well as hybrid and AI-native film studios such as B5 Studios and Promise Advanced Imagination.
It is also teaming up with design and visual effects studios like Cantina Creative, and with directors David Ayer, known for his work on Fast and Furious and Suicide Squad, and Jaume Collet-Serra, known for Black Adam and Jungle Cruise.
Bryan Lourd, CEO and co-chairman of Creative Artists Agency, noted, “Adobe is a… company that recognises the importance of protecting creators’ rights and intellectual property and is committed to building a responsible AI ecosystem.”
The post Adobe Launches Firefly Foundry to Safeguard IP Rights for Creative Artists appeared first on Analytics India Magazine.
Google Photos is stepping up its use of generative AI with Me Meme, a new experimental feature rolling out on Android and iOS in the US that lets users turn themselves into memes with AI.
To begin with, users can either choose a meme template from Google Photos’ built-in presets or upload a reference image of their own. They then select a selfie or portrait in which their face is clearly visible. Google recommends using a well-lit, front-facing photo that is sharp and in focus for the best results. Before generating the meme, users can make small adjustments to the source image.
Once the meme is created, it can be saved to the photo library, regenerated, or shared directly. Google also provides tools to compare the original image with the AI-generated version, as well as an option to submit feedback on the output.
Unlike the standalone Gemini app, which already allows users to generate images and memes through text prompts, Google Photos is positioning Me Meme as a more guided and accessible experience. By embedding the feature directly within the Create tab, Google appears to be targeting casual users who may be reluctant to experiment with prompts or switch between apps.
The name ‘Me Meme’ itself seems deliberately designed for virality, reflecting Google Photos’ broader effort to reframe the Create tab as a hub for playful, shareable AI tools. Me Meme now sits alongside features such as Create with AI, Photo to video, Remix, Collage, Highlight video, Cinematic photo, and Animation.
For now, Me Meme is labelled as an experimental feature and is rolling out gradually. It was not visible on all devices tested, and Google has yet to share details on when it may expand beyond the US or move out of the testing phase.
Google Photos launched in May 2015 as a standalone spin-off from Google+ Photos (itself the successor to Picasa). In 2021, Google ended its free unlimited storage policy, moving to a shared 15 GB limit across Google Drive, Gmail, and Photos. Over the past year, the app has gained AI features such as Magic Editor and the Gemini-powered Ask Photos search.
The post Google Photos Tests ‘Me Meme’ Feature to Turn Selfies Into AI memes appeared first on Analytics India Magazine.
GitHub has introduced the GitHub Copilot SDK in technical preview, allowing developers to embed Copilot’s agentic capabilities directly into their own applications.
The SDK exposes the same execution loop used by GitHub Copilot CLI, including planning, tool invocation, file editing, and command execution. According to GitHub, this is intended to reduce the complexity of building agent-based systems from scratch.
“Building agentic workflows from scratch is hard,” GitHub chief product officer Mario Rodriguez said in a blog post. “Even before you reach your actual product logic, you’ve already built a small platform.”
GitHub said the Copilot SDK provides programmatic access to Copilot’s production-tested agent loop, removing the need for developers to design their own planners and runtimes. The SDK supports multiple AI models, custom tool definitions, MCP server integration, GitHub authentication, and real-time streaming.
The technical preview initially supports Node.js, Python, Go, and .NET. Developers can use an existing GitHub Copilot subscription or supply their own API key. The open repository includes setup instructions, starter examples, and SDK references for each language.
GitHub recommends starting with a single task, such as updating files or running commands, and letting Copilot plan and execute the steps while the host application provides tools and constraints. In an example shared by GitHub, developers create a Copilot client, start a session with a specified model, and send prompts programmatically.
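The agent loop the SDK packages, a model that alternates between requesting tool invocations and producing a final answer while the host supplies the tools, can be sketched in miniature. This is an illustrative toy only: the model call is stubbed, and the function and tool names here are hypothetical, not the Copilot SDK's actual API.

```python
def read_file_tool(path):
    """A host-provided 'tool' the agent may invoke; echoes for the demo."""
    return f"contents of {path}"

# The host application registers the tools the agent is allowed to use.
TOOLS = {"read_file": read_file_tool}

def stub_model(messages):
    """Stand-in for a real model call: first request a tool, then answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "read_file", "args": {"path": "README.md"}}
    return {"answer": "Summarised README.md"}

def agent_loop(prompt, max_steps=5):
    """Minimal plan/act loop: run tools the model asks for until it answers."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):
        decision = stub_model(messages)
        if "answer" in decision:           # model is done: return final text
            return decision["answer"]
        tool = TOOLS[decision["tool"]]     # model asked for a tool: dispatch it
        result = tool(**decision["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish within max_steps")

print(agent_loop("Summarise the README"))  # → Summarised README.md
```

In the real SDK, the stubbed pieces (model selection, tool schemas, authentication, streaming) are handled for you; the host's job reduces to registering tools and constraints, which is the division of labour the article describes.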
The company said the SDK builds directly on the capabilities of Copilot CLI, which already allows users to plan projects, modify files, run commands, and delegate tasks without leaving the terminal. Recent updates to Copilot CLI include persistent memory, multi-step workflows, full MCP support, and asynchronous task delegation.
“The SDK takes the agentic power of Copilot CLI and makes it available in your favourite programming language,” Rodriguez wrote. “This makes it possible to integrate Copilot into any environment.”
Internal GitHub teams have used the SDK to build tools such as YouTube chapter generators, summarisation tools, custom agent interfaces, and speech-to-command workflows, according to the company.
GitHub positioned the Copilot SDK as an execution layer, with GitHub managing authentication, model access, and session handling, while developers control how those components are used within their applications.
The post GitHub Introduces Copilot SDK to Embed AI Agents in Applications appeared first on Analytics India Magazine.
The Maharashtra government and Supervity AI have signed a landmark MoU at the World Economic Forum Annual Meeting 2026 in Davos to establish the world’s first AI global capability centre (GCC) Hub in Mumbai.
The hub will function as a next-generation applied agentic AI R&D and innovation centre, enabling global enterprises to transition from traditional offshore GCC models to AI-first operations powered by autonomous, policy-driven multi-agent AI systems.
The hub will be anchored in Mumbai’s central business district, the Bandra Kurla Complex (BKC), and will operate as a launchpad for multinational enterprises to design, test, deploy, and scale multi-agent AI employees across finance, procurement, compliance, supply chain, customer operations, and other core business functions.
Unlike conventional GCCs that depend on human-intensive execution, Supervity AI’s GCC hub will focus on self-driving AI employees, operating under human-defined policies, governance frameworks, and enterprise-grade auditability.
Under the MoU, Supervity AI and the state government will jointly establish a dedicated agentic AI R&D centre under the hub.
This will enable enterprises to safely deploy AI-driven operating models across front, middle, and back-office functions, develop AI-first operating frameworks aligned with global regulatory and compliance standards, and build a robust ecosystem of AI talent, solution partners, and enterprise adopters to support large-scale, responsible adoption of AI-first enterprise operations.
As part of the broader initiative, the state will support talent enablement and institutional partnerships, including the structured training of up to 25,000 forward-deployed AI engineers over time.
Supervity AI also plans to establish four industry-focused AI GCC spoke centres across key nodes in Maharashtra, leveraging tier-2 city talent through a distributed hub-and-spoke model.
The collaboration will also explore the progressive adoption of AI-led operating models across 48 Maharashtra government departments, contributing to the creation of a state-level AI GCC framework supported by technology partners.
Commenting on the partnership, Siva Moduga, co-founder and CEO, Supervity AI, said, “Enterprise operations are undergoing a structural shift. Traditional GBS and GCC models were designed for scale through human effort, whereas the next decade demands scale through AI execution with strong governance.”
The post Mumbai to Host World’s First AI GCC Hub as Maharashtra & Supervity AI Sign Landmark MoU at Davos appeared first on Analytics India Magazine.