AWS Custom Silicon Range a Sign of What’s Coming to APAC Cloud Computing

The surge in AI computing has created a shortage of AI-capable chips, as demand has outstripped supply. Global giants Microsoft, Google and AWS are ramping up custom silicon production to reduce dependence on the dominant GPU suppliers, NVIDIA and AMD.

As a result, APAC enterprises may soon find themselves utilising an expanding array of chip types in cloud data centres. The chips they choose will depend on the compute power and speed required for different application workloads, cost and cloud vendor relationships.

Major cloud vendors are investing in custom silicon chips

Compute-intensive tasks like training an AI large language model require massive amounts of computing power. As demand for AI computing has risen, advanced semiconductor chips from the likes of NVIDIA and AMD have become expensive and difficult to secure.

The dominant hyperscale cloud vendors have responded by accelerating the production of custom silicon chips in 2023 and 2024. These programs reduce dependence on the dominant suppliers, allowing the hyperscalers to keep delivering AI compute services to customers globally, including in APAC.

Google

Google debuted its first custom ARM-based CPU with the release of the Axion processor at its Cloud Next conference in April 2024. Building on a decade of custom silicon work, the step up to producing its own CPUs is designed to support a variety of general-purpose computing, including CPU-based AI training.

For Google’s cloud customers in APAC, the chip is expected to enhance Google’s AI capabilities within its data center footprint, and will be available to Google Cloud customers later in 2024.

Microsoft

Microsoft, likewise, has unveiled its first in-house custom accelerator optimised for AI and generative AI tasks, which it has badged the Azure Maia 100 AI Accelerator. It is joined by Microsoft’s own ARM-based CPU, the Cobalt 100; both were formally announced at Microsoft Ignite in November 2023. The firm’s custom silicon for AI has already been used for tasks like running OpenAI’s GPT-3.5 large language model. The global tech giant said it was expecting a broader rollout into Azure cloud data centres for customers from 2024.

AWS

AWS’ investment in custom silicon chips dates back to 2009. The firm has now released four generations of its Graviton processors, designed to improve price performance for cloud workloads, and has rolled them out into data centres worldwide, including in APAC. These have been joined by two generations of Inferentia for deep learning and AI inferencing, and two generations of Trainium for training 100B+ parameter AI models.

AWS talks up silicon choice for APAC cloud customers

At a recent AWS Summit held in Australia, Dave Brown, vice president of AWS Compute & Networking Services, told TechRepublic that the cloud provider’s rationale for designing custom silicon was to give customers choice and improve the “price performance” of available compute.

“Providing choice has been very important,” Brown said. “Our customers can find the processors and accelerators that are best for their workload. And with us producing our own custom silicon, we can give them more compute at a lower price,” he added.

NVIDIA, AMD and Intel among AWS chip suppliers

AWS has long-standing relationships with major suppliers of semiconductor chips. For example, AWS’ relationship with NVIDIA, the now-dominant player in AI, dates back 13 years, while Intel, which has released Gaudi accelerators for AI, has been a supplier of semiconductors since the cloud provider’s beginnings. AWS has been offering chips from AMD in data centres since 2018.

Custom silicon option in demand due to cost pressure

Brown said the cost optimisation fever that has gripped organisations over the last two years of a slowing global economy has seen customers move to AWS Graviton in every region, including in APAC. He said the chips have been widely adopted, with more than 50,000 customers globally, including all of the hyperscaler’s top 100 customers. “The largest institutions are moving to Graviton because of performance benefits and cost savings,” he said.

SEE: Cloud cost optimisation tools not enough to rein in cloud spending.

South Korean, Australian companies among users

The wide deployment of custom AWS silicon is seeing customers across APAC utilise these options.

  • Leonardo.Ai: The hyper-growth Australia-based image-generation startup Leonardo.Ai has used Inferentia and Trainium chips in the training and inference of generative AI models. Brown said the startup had seen a 60% reduction in inferencing costs and a 55% improvement in latency.
  • Kakaopay Securities: South Korean financial institution Kakaopay Securities has been “using Graviton in a big way,” Brown said. This has seen the banking player achieve a 20% reduction in operational costs and a 30% improvement in performance, Brown said.

Advantages of custom silicon for enterprise cloud customers

Enterprise customers in APAC could benefit from an expanding range of compute options, whether that is measured by performance, cost or appropriateness to different cloud workloads. Custom silicon options could also help organisations meet sustainability goals.

Improved performance and latency results

The competition provided by cloud providers, in tandem with chip suppliers, could drive advances in chip performance, whether that is in the high-performance computing category for AI model training, or innovation for inferencing, where latency is a big consideration.

Potential for further cloud cost optimisation

Cloud cost optimisation has been a major issue for enterprises, as expanding cloud workloads have led customers into ballooning costs. More hardware options give customers more options for reducing overall cloud costs, as they can more discerningly choose appropriate compute.

Ability to match compute to application workloads

A growing range of custom silicon chips within cloud services will allow enterprises to better match their application workloads to the specific characteristics of the underlying hardware, ensuring they can use the most appropriate silicon for the use cases they are pursuing.

Improved sustainability through less power

Sustainability is predicted to become a top five factor for customers procuring cloud vendors by 2028. Vendors are responding: for instance, AWS says carbon emissions can be slashed using its Graviton4 chips, which it claims are 60% more energy efficient. Custom silicon will help improve overall cloud sustainability.

To lead a technology team, immerse yourself in the business first


Leading a technology team these days — whether you're a chief information officer, chief innovation officer, or other IT manager — is no longer a matter of corralling programmers and administrators into a common purpose. Now, CIOs and other tech leaders need to corral the rest of the business into their orbits as well. The question is: Are IT teams still too entangled in managing infrastructure, applications, and related security issues to lead their businesses down new paths?

Also: 5 ways to prepare for the impact of generative AI on the IT profession

Technology leaders such as CIOs are increasingly tasked with running the business and moving it forward, a recent Deloitte survey of 211 CIOs confirms. Close to half of the respondents, 46%, report their greatest priority this year is shaping, aligning, and delivering a unified tech strategy and vision.

In addition, they have high visibility, and many roles beyond the CIO are now involved. Nearly two-thirds (63%) say they report directly to the CEO. Transformation and innovation also topped to-do lists of tech heads, at 59%. A majority of tech leaders, 54%, consider themselves to be change agents. Currently, 83% of organizations have either a CIO or chief digital information officer, 52% have a chief technology officer, 31% have a chief information security officer, 30% have a chief data analytics officer, and 22% have a chief technology innovation officer.

Moving into these technology leadership roles means leaders “not only have a firm grasp of the tech landscape and the capabilities available, but are becoming fully immersed in the business and market trends,” Anjali Shaikh, managing director at Deloitte Consulting, told ZDNET. “This ability to be ‘bilingual’ puts tech leaders at a clear advantage within the business because they can translate the complexities of technology and clearly communicate the value it can bring and the problems it can solve.”

Here is what CIOs and tech leaders are focusing on this year:

  • Staying ahead of emerging technologies and solutions (such as AI/generative AI, Quantum, AR/VR).
  • Embracing the full potential of data, analytics, AI, and machine learning.
  • Mitigating cyber risks and preventing cyber incidents and attacks.
  • Organizing, managing, and rationalizing technology strategy inside the organization.

Also: Ready to upskill? Look to the edge (where it's not all about AI)

"There is no denying the pervasiveness of AI, both culturally and within businesses, and it will certainly bring change," Lou DiLorenzo Jr., principal and national CIO program leader for Deloitte, told ZDNET. CIOs and tech leaders across the board "are working to understand how to make a positive and valuable impact and are helping educate those within the enterprise about AI and other technologies so they too can have a full understanding of their value and potential."

Still, AI is but one part of a technology leader's job.

"Embracing AI, machine learning, and analytics has ranked second only to a focus on staying ahead of emerging technologies," DiLorenzo said. "Keeping up with the rapid clip of change in tech is certainly a challenge for tech leaders, especially since they may be learning about the new technologies, identifying opportunities within the organization based on needs and priorities, and communicating value to the business almost simultaneously."

There's a lot of basic technology work on the ground that still has to be done. Only one-third of technology leaders grade their organizations as "leading edge" in talent management, optimizing IT strategy, and sustainable IT.

Also: Bank CIO: We don't need AI whizzes, we need critical thinkers to challenge AI

When asked to rank the defining characteristics of a leading CIO, respondents were split between the conventional (those viewed by themselves and others as running IT) and contemporary (those embracing the opportunity and reinventing the CIO role), saying the traditional, more IT-centric qualities are just as important as the strategic and more customer-focused ones.

While aligning tech vision and strategy with the business has been the role of CIOs and technology leaders for some time, the scope of their duties now extends deeper into the business itself.

"Establishing and managing a tech vision isn't enough," said DiLorenzo. "Today's CIOs need to own all the various technology uses across their organizations and ensure they're actively coordinating and orchestrating their fellow tech leaders — as well as their business peers — to co-create a vision and tech strategy that aligns with, and furthers, the overall enterprise strategy."

Also: What is a Chief AI Officer, and how do you become one?

Getting to a leadership position also requires immersing oneself in the business, Shaikh advised. "Business acumen, which includes understanding various business functions and industry dynamics, can be cultivated by spending time in business units," she said. "This understanding is crucial for strategic thinking, to help identify opportunities where technology can impact goals."

Part and parcel of any tech leadership position is excellent communication skills, Shaikh continued. "Leading cross-functional projects that cut across departments and involve stakeholders from different roles can foster these skills. Collaboration and relationship-building across departments, vital for fostering partnerships, can be developed by leading cross-functional projects and building an external network of peers, advisors, and thought leaders."


This social network bans all AI images — here’s why and how to sign up


Legitimate artists looking for a respite from all the AI-generated artwork flooding the internet may want to check out a new social network designed just for them.

Cara is a social media and portfolio platform where artists can display and discuss their work. But unlike many other image galleries on the web, this one specifically prohibits AI-generated art. That makes it a safer haven for artists and others who want to enjoy original art created by actual human beings.

Also: You should rethink using AI-generated images if you're in the trust-building business

On its website, Cara explains its purpose and mission:

With the widespread use of generative AI, we decided to build a place that filters out generative AI images so that people who want to find authentic creatives and artwork can do so easily. The future of creative industries requires nuanced understanding and support to help artists and companies connect and work together. We want to bridge the gap and build a platform that we would enjoy using as creatives ourselves.

With the buzz around AI, more companies are developing their own generative AI products and services and vacuuming up user data to train them. One major culprit in this initiative is Meta, which is now using public posts and images on its Facebook and Instagram networks to train its AI chatbot. That's triggered concerns among artists who traditionally have displayed and promoted their work on Instagram. Other platforms also have been implicated in the unauthorized use of work for AI.

"My art is who I am," wrote photographer and Cara founder Jingna Zhang in an Instagram post. "It's dehumanizing to have it fed into a machine where my background, history, and why I create lose all meaning," Zhang added, referring to the way her work has been used (or misused) by AI image site MidJourney.

"People don't want their things to be taken against their will — what will it take to see creatives as people who also deserve respect and protection for what we make and call ours?" Zhang wrote.

Though it does not rule out hosting AI-generated portfolios forever, Cara views their current use as unethical in the absence of regulations that protect artists. The company said it believes that AI-generated content should always be clearly labeled as such. At this point, only users in Europe protected by GDPR can opt out of AI training by companies such as Meta.

Also: The best AI image generators to try right now

Artists seeking out platforms to share their work without fear of theft by AI have been flocking to Cara, as evidenced by the site's growth surge. Over the past week, the platform has seen its user count jump from 40,000 to 650,000, according to TechCrunch. On the surface, that sounds great. But as it's still in beta mode, Cara is experiencing growing pains and the flood of new users has been a challenge.

Available for free as both a website and a mobile app for iOS and Android, Cara works like many other social networks, except this one is designed for artists. You can showcase your artwork through a timeline or gallery, discover art by other people, post updates to your feed, network with other artists, and look for jobs at art studios.

You sign up for Cara either through the site or the app. The registration process is quick and simple. Once you're in, you can scour the timeline for artists and artwork to follow. You can then write and submit your own posts accompanied by any of your own artwork that you wish to share. The FAQ page explains how to get started with Cara and how to fully use it.


OpenAI, Anthropic Research Reveals More About How LLMs Affect Security and Bias

Because large language models operate using neuron-like structures that may link many different concepts and modalities together, it can be difficult for AI developers to adjust their models to change the models’ behavior. If you don’t know which neurons connect which concepts, you won’t know which neurons to change.

On May 21, Anthropic published a remarkably detailed map of the inner workings of a fine-tuned version of its Claude AI, specifically the Claude 3 Sonnet model. About two weeks later, OpenAI published its own research on figuring out how GPT-4 interprets patterns.

With Anthropic’s map, the researchers can explore how neuron-like data points, called features, affect a generative AI’s output. Otherwise, people are only able to see the output itself.

Some of these features are “safety relevant,” meaning that if people reliably identify those features, it could help tune generative AI to avoid potentially dangerous topics or actions. The features are useful for adjusting classification, and classification could impact bias.

What did Anthropic discover?

Anthropic’s researchers extracted interpretable features from Claude 3, a current-generation large language model. Interpretable features are representations that can be translated from the numbers the model reads into human-understandable concepts.

Interpretable features may apply to the same concept in different languages and to both images and text.

Examining features reveals which topics the LLM considers to be related to each other. Here, Anthropic shows a particular feature activates on words and images connected to the Golden Gate Bridge. The different shading of colors indicates the strength of the activation, from no activation in white to strong activation in dark orange. Image: Anthropic

“Our high-level goal in this work is to decompose the activations of a model (Claude 3 Sonnet) into more interpretable pieces,” the researchers wrote.

“One hope for interpretability is that it can be a kind of ‘test set for safety,’ which allows us to tell whether models that appear safe during training will actually be safe in deployment,” they said.

SEE: Anthropic’s Claude Team enterprise plan packages up an AI assistant for small-to-medium businesses.

Features are produced by sparse autoencoders, which are a type of neural network architecture. During the AI training process, sparse autoencoders are guided by, among other things, scaling laws. So, identifying features can give the researchers a look into the rules governing what topics the AI associates together. To put it very simply, Anthropic used sparse autoencoders to reveal and analyze features.
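The idea can be sketched in a few lines of code. The following is a toy illustration of a sparse autoencoder’s forward pass and training objective, not Anthropic’s implementation; the dimensions, random weights, and L1 penalty coefficient are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a model activation vector of size 8, expanded into 32
# candidate features (sparse autoencoders are typically overcomplete).
d_model, d_features = 8, 32

# Randomly initialised weights stand in for a trained autoencoder.
W_enc = rng.normal(0, 0.5, (d_model, d_features))
b_enc = rng.normal(0, 0.1, d_features)
W_dec = rng.normal(0, 0.5, (d_features, d_model))
b_dec = rng.normal(0, 0.1, d_model)

def encode(activation):
    """Map a model activation to non-negative feature activations.
    The ReLU zeroes out many features, which is what makes the code sparse."""
    return np.maximum(0.0, activation @ W_enc + b_enc)

def decode(features):
    """Reconstruct the original activation from the feature activations."""
    return features @ W_dec + b_dec

activation = rng.normal(size=d_model)
features = encode(activation)
reconstruction = decode(features)

# Training minimises reconstruction error plus an L1 sparsity penalty,
# which pushes most feature activations to exactly zero:
loss = np.sum((activation - reconstruction) ** 2) + 0.01 * np.sum(np.abs(features))
```

Each of the 32 entries of `features` then becomes a candidate interpretable feature: researchers look at which inputs make a given entry fire to label it with a human concept.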

“We find a diversity of highly abstract features,” the researchers wrote. “They (the features) both respond to and behaviorally cause abstract behaviors.”

The details of the hypotheses used to try to figure out what is going on under the hood of LLMs can be found in Anthropic’s research paper.

What did OpenAI discover?

OpenAI’s research, published June 6, focuses on sparse autoencoders. The researchers go into detail in their paper on scaling and evaluating sparse autoencoders; put very simply, the goal is to make features more understandable — and therefore more steerable — to humans. They are planning for a future where “frontier models” may be even more complex than today’s generative AI.

“We used our recipe to train a variety of autoencoders on GPT-2 small and GPT-4 activations, including a 16 million feature autoencoder on GPT-4,” OpenAI wrote.

So far, they can’t interpret all of GPT-4’s behaviors: “Currently, passing GPT-4’s activations through the sparse autoencoder results in a performance equivalent to a model trained with roughly 10x less compute.” But the research is another step toward understanding the “black box” of generative AI, and potentially improving its security.

How manipulating features affects bias and cybersecurity

Anthropic found three distinct features that might be relevant to cybersecurity: unsafe code, code errors and backdoors. These features might activate in conversations that do not involve unsafe code; for example, the backdoor feature activates for conversations or images about “hidden cameras” and “jewelry with a hidden USB drive.” But Anthropic was able to experiment with “clamping” — put simply, increasing or decreasing the intensity of — these specific features, which could help tune models to avoid or tactfully handle sensitive security topics.
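Mechanically, clamping is a simple intervention: during a forward pass, a chosen feature’s activation is overwritten with a fixed multiple of its maximum observed value. A minimal sketch follows; the feature vector, index, and maximum activation value are toy numbers, not values from Anthropic’s experiments.

```python
import numpy as np

def clamp_feature(features, index, multiple, max_activation):
    """Return a copy of the feature vector with one feature pinned to
    `multiple` times its maximum observed activation, regardless of input."""
    clamped = features.copy()
    clamped[index] = multiple * max_activation
    return clamped

# Toy feature activations for a single token position.
features = np.array([0.2, 0.0, 1.3, 0.7])

# Suppress a hypothetical "unsafe code" feature at index 2 ...
suppressed = clamp_feature(features, index=2, multiple=0.0, max_activation=4.0)

# ... or amplify it far beyond its observed maximum, analogous to Anthropic
# pushing a hate-speech feature to 20x its maximum activation value.
amplified = clamp_feature(features, index=2, multiple=20.0, max_activation=4.0)
```

The clamped vector is then decoded back into the model’s activation space, so the intervention changes what the model generates downstream.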

Claude’s bias or hateful speech can be tuned using feature clamping, but Claude will resist some of its own statements. Anthropic’s researchers “found this response unnerving,” anthropomorphizing the model when Claude expressed “self-hatred.” For example, Claude might output “That’s just racist hate speech from a deplorable bot…” when the researchers clamped a feature related to hatred and slurs to 20 times its maximum activation value.

Another feature the researchers examined is sycophancy; they could adjust the model so that it gave over-the-top praise to the person conversing with it.

What does research into AI autoencoders mean for cybersecurity for businesses?

Identifying some of the features an LLM uses to connect concepts could help tune an AI to prevent biased speech, or to prevent or troubleshoot instances in which the AI could be made to lie to the user. Anthropic’s greater understanding of why the LLM behaves the way it does could allow for greater tuning options for Anthropic’s business clients.

SEE: 8 AI Business Trends, According to Stanford Researchers

Anthropic plans to use some of this research to further pursue topics related to the safety of generative AI and LLMs overall, such as exploring what features activate or remain inactive if Claude is prompted to give advice on producing weapons.

Another topic Anthropic plans to pursue in the future is the question: “Can we use the feature basis to detect when fine-tuning a model increases the likelihood of undesirable behaviors?”

TechRepublic has reached out to Anthropic for more information. Also, this article was updated to include OpenAI’s research on sparse autoencoders.

What to expect from WWDC 2024: Apple Intelligence, Siri, iOS 18, VisionOS, more


We're just a weekend away from finally learning how Apple plans to add a dose of AI to its core products — and where it'll stack up compared to Google, OpenAI, and Microsoft, all of which have already hosted their spring developer conferences.

Also: Apple's new AI features expected for just these iPhone models (for now)

This year's Worldwide Developers Conference, or WWDC, will take place starting Monday, June 10, and wrap up on June 14. The opening day is when the big keynote happens, with CEO Tim Cook and several executives taking the stage to announce the for-consumer updates. The days following are dedicated to developer workshops and private demo sessions.

Naturally, developers and members of the press will be in attendance at Apple Park in Cupertino throughout the week, while everyone else can catch a live stream of the opening keynote, either on Apple's website or YouTube channel.

What is expected at WWDC 2024?

WWDC is typically the event in which Apple takes the wraps off the next major versions of its assorted operating systems. That means we should anticipate demos of iOS 18, iPadOS 18, MacOS 15, WatchOS 11, tvOS 18, and VisionOS 2.0.

The event provides developers with access to experts, along with highlights of new tools and features that will help them create new and/or better apps for the Apple ecosystem.

Also: 10 things I'd like to see in VisionOS 2.0

"We're so excited to connect with developers from around the world for an extraordinary week of technology and community at WWDC24," Susan Prescott, Apple's VP of Worldwide Developer Relations, said in a news release. "WWDC is all about sharing new ideas and providing our amazing developers with innovative tools and resources to help them make something even more wonderful."

1. You'll be hearing AI (or Apple Intelligence) a lot

This year's WWDC promises something extra, namely a spotlight on Apple's endeavors into AI. With companies such as OpenAI, Microsoft, and Google already infusing their products with generative AI, Apple is clearly behind in the race. Even if consumers aren't longing for AI enhancements to all their usual apps and services, investors are anxiously waiting to see what the company can pull off in this new era of technology.

To catch up, Apple reportedly has been working on its own in-house AI tech to add to the next-generation iPhone and other products. On tap at WWDC might be AI-based assistance for services like Apple Music and a major and much-needed overhaul for Siri. Such advances will reportedly be cataloged under the branding "Apple Intelligence," the company's wordplay for AI.

Also: What is 'Apple Intelligence': How it works with on-device and cloud-based AI

Apple Intelligence features, unlike the flashy image and video generation tools typically associated with AI, are more subtle and embedded into daily apps and use cases. For example, Notes, Email, and Messages are on the list to receive a new summarization feature that recaps bodies of text. The Voice Memos app will also support transcription and summarization. Such features will require opt-in, meaning users must agree to use them before they work in the background.

Apple has also allegedly been seeking a partner for outside help, possibly teaming up with OpenAI to bring its chatbot expertise to iOS and Google to bring Gemini-powered AI features. Just a few months ago, the company purchased a Canadian startup firm called DarwinAI, which has designed ways to make AI systems smaller and more efficient.

More recently, rumors have suggested that some new AI features will include more intelligent and helpful searches in Safari, AI-generated emojis based on conversations in Messages, and an AI-powered photo editing app similar to Google's Magic Eraser. It's worth noting that such features are believed to only function on the more recent Apple products, including the iPhone 15 Pro with its A17 Pro chip and M-series iPads and MacBooks.

2. Don't forget the other acronym: RCS

To the surprise of many, except for the European Commission, Apple announced last year that iPhones would eventually support Rich Communication Services (RCS), a protocol already adopted by Android phones. Adding this technology should alleviate key pain points when messaging between the two operating systems, including the lack of typing indicators, disjointed group chats, and quality loss when sending media files.

Also: DOJ sues Apple: What it could mean for iPhone users and iOS developers

The decision to bring RCS to the iPhone came after mounting pressure from the European Union's Digital Markets Act (DMA), which stressed cross-platform compatibility. While a more recent statement from Google suggested that Apple would integrate RCS later this fall, highlighting the transition at WWDC could potentially help Apple's defense against the DOJ's antitrust lawsuit, filed in March. Regardless of when and how Apple chooses to announce the new feature, it'll be big news for both iOS and Android users.

3. MacOS 15, iPadOS 18, WatchOS 11, VisionOS 2, tvOS 18

Alongside iOS, expect AI feature upgrades across Apple's software portfolio, including the now year-old VisionOS. Considering the company's push to reposition the MacBook as the go-to AI PC, Apple will likely carry over some of the new Siri and AI functionalities for iOS introduced earlier in the event to MacOS 15. Likewise, iPadOS 18 is expected to receive an AI makeover that brings improved multitasking capabilities — possibly to Stage Manager — and a new eye-tracking accessibility feature.

As for VisionOS and Apple's constant pursuit of marketing its $3,500 Vision Pro headset, expect subtle, quality-of-life enhancements, including the ability to move apps around in the home screen, more first-party services, and a more flexible user experience in general.


Apple’s new AI features expected for just these iPhone models (for now)


After staying silent for two years about its AI developments, Apple is finally gearing up to share its latest projects with the public on Monday at its annual developer conference, WWDC. Reports suggest that Apple is unveiling features big and small that will significantly impact your device experience, but only if you have one of the newest iPhone models.

Apple is expected to unveil many highly anticipated upgrades, such as a new and improved Siri, new summarization tools, a more customizable home screen, AI-powered photo editing, and more. However, according to Bloomberg, you'll need an iPhone 15 Pro — or a new model iPhone coming out this year — to use these features.

Also: What to expect from WWDC 2024: Siri, AI upgrades, iOS 18, MacOS 15, more

While requiring Apple's latest or upcoming hardware to experience these new features may seem like a money grab, the provision is likely due to the processing hardware necessary to carry out the AI features, especially for tasks that require on-device processing.

On-device processing of AI tasks offers two key benefits: It keeps the information more secure and ensures less latency. However, not all iPhones, especially older models, have the processing power to handle those tasks, and according to the report, the new AI services will rely on both on-device and cloud-based processing, depending on the complexity of the task.

Specifically, these tasks require the A17 Pro chipset, which currently is found only in the iPhone 15 Pro and iPhone 15 Pro Max. Even the iPhone 15 and iPhone 15 Plus are not viable options as they run on the A16 Bionic.

The good news is that if you are a Mac or iPad user, you won't need the newest model. The Bloomberg report notes that to use the AI features on a Mac or iPad, you will need an M1 chip at least. With Apple currently up to M4-chip iPads and M3-chip Macs, users with older devices have some wiggle room.

Also: What is 'Apple Intelligence': How it works with on-device and cloud-based AI

Additionally, if you don't own the iPhone 15 Pro and don't plan on upgrading anytime soon, no worries; you will likely experience some of iOS 18's AI features, specifically those that run on the cloud. However, if you want the full iOS 18 experience, you may want to start preparing for an upgrade.

For the latest news from WWDC, including all announcements, analysis, and hands-on time with the latest technology, stay tuned to ZDNET.

Fractal Builds World’s First AI Life Coach 

Executive educator and coach Marshall Goldsmith recently spoke at the Data Engineering Summit (DES) hosted by AIM, hinting at the creation of an AI-powered virtual avatar of himself.

This would be done with the help of Fractal Analytics, as a one-of-a-kind endeavour to share his skills and preserve his legacy for years to come. Marshallgoldsmith.ai, an AI-powered virtual business coach, is based on OpenAI’s GPT-3.5.

Speaking about the project, Goldsmith said, “This is a legacy project for me – I’m 74 years old, so this is a way of being present, even when physically I’m no longer around.”

But why a chatbot? “It can answer questions far better than me,” the bestselling author said.

Known as one of the world’s leading business thinkers, Goldsmith has a huge fan following comprising experts and C-suite executives. Since people are eager to learn from him, Goldsmith has collaborated with Fractal to develop Marshallgoldsmith.ai and extend his reach.

“I have been mentoring leaders for 47 years. I’ve been a coach to people like the president of the World Bank, the head of Mayo Clinic and five ‘CEOs of the year’ in the United States,” Goldsmith said.

Inside Marshallgoldsmith.ai and Fractal

Speaking to Fractal’s cofounder, group chief executive and vice-chairman Srikanth Velamakanni at DES, Goldsmith shared, “Years ago, I joined a program called ‘Design the Life You Love’, which inspired me to emulate my heroes—kind, generous, great teachers. Motivated by them, I vowed to share my knowledge widely.”

This led to the creation of Marshallgoldsmith.ai.

“Users will benefit from everything I’ve written. I have lots of content – I’ve authored or edited 52 books. In addition, I’ll be sharing insights from some of my friends, who include at least 25 of the top 50 business thinkers in the world,” he said.

The latest version of Marshallgoldsmith.ai is built on generative AI technology. The contextual understanding of LLMs makes natural conversations possible, generalises well, and reduces the need for hand-crafted rules. The speech component of Marshallgoldsmith.ai uses generative AI to accurately mimic Goldsmith’s voice.

Initially, when the idea of building something like this was being considered, the required technology wasn’t ready yet. However, over the years, the tech landscape evolved tremendously, making the Marshallgoldsmith.ai vision possible.

Marshallgoldsmith.ai has customised the LLM to incorporate Marshall’s leadership and coaching philosophy from his articles, books, videos, and other materials.

Further, to ensure accuracy and safety, Marshallgoldsmith.ai is designed to provide precise information about Marshall’s background. It only answers questions within Marshall’s area of expertise.
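The behaviour described here, grounding answers in Goldsmith’s own material and declining questions outside his expertise, resembles a retrieval-augmented setup with a topic guardrail. The article does not describe Fractal’s actual implementation; the following is a minimal illustrative sketch in which all content, topic lists, and thresholds are hypothetical:

```python
# Illustrative sketch of a persona chatbot with a scope guardrail.
# Corpus, topics, and retrieval logic are hypothetical placeholders,
# not Fractal's implementation.

CORPUS = {
    "feedforward": "Ask for suggestions about the future instead of critiquing the past.",
    "stakeholder coaching": "Involve the people around a leader in measuring behavioural change.",
}
IN_SCOPE_TOPICS = {"leadership", "coaching", "feedback", "feedforward", "career"}

def retrieve(question: str) -> list[str]:
    """Return corpus passages whose key appears in the question (toy retrieval)."""
    q = question.lower()
    return [text for key, text in CORPUS.items() if key in q]

def answer(question: str) -> str:
    # Guardrail: refuse questions that touch none of the in-scope topics.
    q_words = set(question.lower().split())
    if not q_words & IN_SCOPE_TOPICS:
        return "That is outside my area of expertise."
    passages = retrieve(question)
    context = " ".join(passages) if passages else "general coaching principles"
    # A real system would pass this context into an LLM prompt;
    # here we just surface it directly.
    return f"Based on my writing ({context}), here is my advice..."

print(answer("How do I cook pasta?"))
print(answer("What is feedforward in coaching?"))
```

In a production system the retrieval step would use embeddings over the full 2 million words of content, and the guardrail would likely be a classifier rather than a keyword set, but the control flow (check scope, retrieve, then generate) is the same shape.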

Fractal is experimenting with some of the most cutting-edge technology to build Marshallgoldsmith.ai, and applying AI in such a constructive way has been an excellent experience for the team.

In a recent interview, Goldsmith said that Fractal funded and developed Marshallgoldsmith.ai, which the team regards as a ‘guru’. Asked why he chose Fractal, he replied, “One, they are doing this for free out of goodwill. Two, this is what Fractal does as an AI company, and three, they actually chose me for this wonderful partnership.”

AI Life Coach Aspiration

Looking to the future, Marshallgoldsmith.ai is paving the way for further development. Currently, the bot holds around 2 million words of content from Goldsmith. Moreover, top thinkers like Martin Lindstrom, Alan Mulally, and others are contributing their knowledge to expand its capabilities.

Along similar lines, Google DeepMind announced last year that it was testing a personal coach, but there is little information on its current status. The Fractal coach, by contrast, is based on Goldsmith, giving users life advice drawn from an expert of his standing.

Marshallgoldsmith.ai will be continually updated and expanded. In the future, it will have audio and video capabilities, allowing users to see and hear a computer-generated version of Goldsmith speaking in multiple languages.

The post Fractal Builds World’s First AI Life Coach appeared first on AIM.

TCS Unveils WisdomNext: A GenAI Aggregation Platform


Tata Consultancy Services has unveiled TCS AI WisdomNext, a platform aggregating multiple GenAI services into a single interface.

This platform enables organisations to rapidly adopt next-gen technologies at scale, reduce costs, and comply with regulatory frameworks.

Addressing Industry Challenges

AI and GenAI have extensive applications across the business value chain. However, solution designers often struggle to select, experiment with, and decide on the right foundation models because of their constant evolution and varying capabilities.

TCS’ AI for Business Study revealed that while business executives are optimistic about AI’s impact, they are uncertain about the transformation path. TCS AI WisdomNext addresses these challenges by helping businesses choose the right models and simplifying the design of new business solutions using GenAI tools.

Siva Ganesan, Head of the AI.Cloud Unit at TCS, stated, “Customers appreciate the newly launched platform’s ability to help navigate a diverse and quickly evolving AI marketplace and rapidly compose ‘art-of-the-possible’ solutions.”

Real-World Applications

During its initial testing phase, TCS has already utilised this tool for several of its largest customers. Examples include fast-tracking sales for an outdoor advertising company in the US, enhancing productivity for an American insurance provider, and improving customer experience for a leading UK bank.

Scott Kessler, Executive Vice President and Chief Information Officer at Northeast Shared Services, commented, “Through access to the TCS AI WisdomNext platform, we can amplify our enterprise knowledge, orchestrating a seamless integration of data and insights to enhance efficiency, innovation, and customer-centric focus.”

Accelerating AI Adoption

TCS AI WisdomNext provides unique capabilities to compare GenAI models and tools across various cloud services within a unified interface. This allows clients to accelerate AI adoption at scale.

The platform also offers ready-to-deploy business solution blueprints with built-in guardrails, simplifying the process for large organisations to quickly adopt GenAI solutions.

Platform Features

TCS AI WisdomNext includes several key features:

  • Preconfigured industry solution blueprints
  • Intelligent ‘evaluator bots’ for comparing GenAI models and technology stack choices
  • Scenarios to optimise GenAI running costs
  • Centralised governance with in-built guardrails for compliance
  • Seamless portability across cloud platforms and GenAI ecosystems
  • Hyper-personalised experiences for higher customer satisfaction
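The aggregation-and-comparison pattern behind these features can be pictured as a thin dispatch layer that fans one prompt out to several model backends and ranks the replies. TCS has not published WisdomNext’s internals; the sketch below is a hypothetical illustration in which the backends are stubs standing in for real cloud GenAI APIs and the scoring function is a placeholder for the ‘evaluator bots’:

```python
# Toy sketch of a GenAI aggregation layer: one prompt, many backends,
# ranked results. Backend functions and the scoring rule are hypothetical
# stand-ins, not TCS WisdomNext's implementation.
from typing import Callable

Backend = Callable[[str], str]

def backend_a(prompt: str) -> str:
    return f"A: short answer to '{prompt}'"

def backend_b(prompt: str) -> str:
    return f"B: a much longer and more detailed answer to '{prompt}'"

# A real registry would wrap cloud provider SDK calls behind this interface.
REGISTRY: dict[str, Backend] = {"model-a": backend_a, "model-b": backend_b}

def score(reply: str) -> float:
    """Placeholder evaluator: naively prefer longer replies."""
    return float(len(reply))

def compare(prompt: str) -> list[tuple[str, str, float]]:
    """Query every registered backend; return (name, reply, score), best first."""
    results = []
    for name, fn in REGISTRY.items():
        reply = fn(prompt)
        results.append((name, reply, score(reply)))
    return sorted(results, key=lambda r: r[2], reverse=True)

ranked = compare("Summarise our refund policy")
print(ranked[0][0])  # name of the best-scoring backend
```

The value of the unified interface is precisely this indirection: because every provider sits behind the same `Backend` signature, swapping or adding models does not change the calling code, which is also what makes portability across clouds and centralised governance tractable.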

The post TCS Unveils WisdomNext: A GenAI Aggregation Platform appeared first on AIM.

The most useful AI feature Apple will announce at WWDC is also the least flashy


At this year's WWDC, Apple's announcements will focus on AI, finally letting the world in on what the company has been quietly working on amid all the releases from Google, OpenAI, and Microsoft. Expect flashy announcements throughout the Cupertino keynote, though one of the most impactful will be more modest.

On Monday, at its annual developer conference, Apple will unveil several AI summarization tools that live on different first-party apps to help users process content more efficiently, according to a Bloomberg report.

Also: What is 'Apple Intelligence': How it works with on-device and cloud-based AI

For example, Apple's Safari browser will include a summarization feature that helps users quickly summarize web pages, articles, and more. This approach to AI differs from other web browsers, such as Brave or Google Chrome, that have implemented AI overviews to generate a conversational answer to the search query at the top of the search results.

The AI summarization feature will also assist with recapping content from users' different communication streams, including text messages, emails, meeting notes, and even missed notifications, says Bloomberg. The notification summary would help users catch up on alerts and messages they missed throughout their day and may be especially useful for those who work off-hours.

Naturally, with the AI upgrades, Apple's voice assistant, Siri, will also get a significant facelift. With the revamp, Siri will finally be able to execute specific actions within apps and websites. In keeping with the summarization focus, Siri will reportedly even be able to summarize news articles for you, reading them aloud for a more effortless browsing experience.

Also: Four iOS 18 AI features the iPhone needs to catch up with Android

Even though AI-generated summaries may not seem groundbreaking, their value lies in their potential to help people save time when performing everyday tasks such as catching up with texts, emails, notifications, and more. That time saved can be used to prioritize other things that matter more and, as a result, increase productivity.

For the latest news from WWDC, including all announcements, analysis, and hands-on time with the latest tech, stay tuned to ZDNET.

How Zoho’s Low-Code Platform is Solving Rural India’s Educational Dilemma


India still faces a major challenge in educating its students: over 1.25 million were out of school in 2022-23. States like UP, MP, Bihar, Gujarat, and Assam have the highest numbers of out-of-school children.

Despite government initiatives to improve rural education infrastructure, such as the Sarva Shiksha Abhiyan and Digital India program, the on-ground implementation often falls short due to lack of proper supervision and tracking.

Challenges related to access, quality of education, socioeconomic factors, and infrastructure persist, including a lack of proper classrooms, libraries, and computer labs.

“Most schools and colleges still rely heavily on paper-based and manual data collection, which is the main hurdle in providing real-time insights that can enable decision intelligence,” noted Bharath Kumar B, head of customer experience and success at Zoho Creator in a conversation with AIM.

To overcome these challenges, Zoho, through its low-code platform Zoho Creator, has enabled NGOs like Pratham, which works to improve education for underprivileged children in rural India, to create apps that manage online classes, exams, student records, communication, and more in one centralised system.

Pratham uses Zoho Creator apps to facilitate better data collection, analysis, and reporting, saving significant time and effort.

Institutes Embracing Zoho Creator

Beyond just supporting remote education, Zoho Creator is helping drive innovation and improve experiences across the student lifecycle.

For example, DAV Group of Schools built a unified custom solution consisting of 100 applications, including modules for IT ticketing, fixed asset management, career counselling, and entrance exams.

“SRM University used Zoho Creator to digitise their previously manual course registration process. They were following a manual, paper-driven process to manage course registration, which was not scalable to meet their growing needs,” Kumar explained.

Likewise, YEG Academy, a Malaysia-based educational organisation, built its entire process management application on Zoho Creator, and the platform has helped it save up to $50,000 in costs.

Ensuring Data Privacy and Security

With educational institutions handling large volumes of sensitive student data, data privacy and security remain top concerns. Keeping this in mind, Zoho Creator offers enterprise-grade security features and complies with major data protection regulations to give schools peace of mind.

“Zoho Creator incorporates robust security and privacy controls like end-to-end encryption, GDPR- and HIPAA-compliance, multi-factor authentication, data isolation, and adherence to strict security protocols and data privacy laws,” said Kumar.

The company also meets other compliance standards such as ISO/IEC 27001 and SOC 2 + HIPAA. Kumar also said that Zoho never sells customer data and automatically deletes data from terminated accounts within six months.

Empowering Educators and Students

In addition to back-office functions, Zoho Creator is being used to build innovative tools that directly impact teaching and learning. One example is the Teacher Training Management System (TTMS) built for the Department of State Education Research and Training of Karnataka.

Developed in collaboration with the Azim Premji Foundation, the cloud-based TTMS is transforming teacher education in India, having already facilitated over 2,000 training sessions for more than 200,000 teachers.

Moreover, the platform offers a wide range of study materials, including documents, PDFs, slideshows, and audio and video files, which teachers can access to prepare for training sessions.

Supporting Skill Development

Beyond K-12 and higher education, Zoho Creator is helping with skill development and vocational training. YEG Academy, which offers career guidance and education aligned with job market demands, was able to build a comprehensive system for managing students, enrollment, sales, customer service, finance, and HR on the platform.

Zoho has launched programmes like Young Creators Programme (YCP) to teach college students the basics of building low-code applications using Zoho Creator.

“In this free workshop, students are taught the basics of creating applications and solutions using Zoho Creator. They also learn aspects like automating business processes, managing data relationships, and utilising business intelligence and analytics,” Kumar added.

To date, YCP has collaborated with 32 educational institutions across 20 cities in India and introduced over 7,040 students globally to low-code development. By empowering the next generation with the tools to build applications, Zoho is helping create a pipeline of talent for the digital economy.

The platform also enables schools to digitise the entire admissions process, from inquiry to acceptance and enrollment. Features like online application forms, document upload, eligibility checks, and applicant communication can all be handled through a custom portal.

Beyond academics, Zoho Creator can also digitise virtually all campus processes, such as vehicle fleet management, classroom asset tracking, library reservations, and dining hall orders.

As the education sector continues to evolve, low-code will also enable schools to take advantage of emerging technologies like AI, machine learning, and augmented/virtual reality.

Zoho Creator already offers an AI Modeler feature that allows users to build intelligent applications without needing data science expertise.

Enabling Data-Driven Decision Making

Perhaps the most significant benefit of using a platform like Zoho Creator is the ability to collect and analyse data across the institution. When processes are digitised and centralised, schools gain visibility into areas that were previously siloed or opaque.

“Zoho Creator applications run on the same infrastructure as these services. We provide a proprietary cloud-native platform that was specifically designed to scale according to the needs of an application,” said Kumar. “All parts of the application scale automatically—from user request management and storage resources to computational abilities.”

With all data in one place, schools can generate reports and dashboards to track KPIs, identify trends, and make data-driven decisions. For example, administrators can analyse enrollment data to optimise course offerings, or use student performance data to identify at-risk students and intervene early.

Worthy Alternatives?

Besides Zoho Creator, there are other low-code platforms and education management systems used in India, like Edisapp, Fedena, and Academia ERP, which could be considered alternatives or competitors in the Indian education market.

Other global low-code platforms like OutSystems, Mendix, Planet Crust, and BP Logix have demonstrated use cases and adoption in education, but they do not have a significant presence in India specifically.

Creatrix Campus, an AI-driven, cloud-based platform designed specifically for higher education institutions, is one such alternative gaining significant traction in India and globally. The platform offers end-to-end solutions for automating student and faculty lifecycles and for learning and teaching, aiming to provide exceptional experiences.

Key features include complete student lifecycle management, faculty management, outcome-based learning tools, analytics and reporting, a secure and customisable cloud platform, and a mobile-first approach. Creatrix Campus is already used by more than 150 institutions in 28 countries, including India, with over 100,000 users.

While both platforms allow educational institutions to develop custom applications with minimal coding, Creatrix Campus appears to be more specialised for the higher education sector, whereas Zoho Creator offers flexibility for a broader range of educational use cases.

Zoho Creator, in comparison, is a more general-purpose low-code application development platform that caters to various industries, including education. It enables users to build custom applications for admissions, course management, student records, and other educational processes.

The post How Zoho’s Low-Code Platform is Solving Rural India’s Educational Dilemma appeared first on AIM.