AWS Trains Over 5.5 Mn in AI and Cloud Skills in India 

Since 2017, AWS, the cloud subsidiary of Amazon, has trained over 5.5 million people in India on AI and cloud skills, and over 8.3 million across the APAC and Japan region.

With generative AI moving from proofs of concept to deployment this year, the company plans to train about two million individuals in AI globally by 2025 through its “AI-Ready” initiative.


“We want to democratise and simplify generative AI so that customers can innovate and scale through AWS services. Keeping responsible AI as our goal, we work backwards from customer needs and provide what they require,” Guru Bala, head of solutions architecture, AWS specialised services, told AIM at AWS Summit Bengaluru, last month.

Through a mix of free and paid courses on platforms like Coursera, individuals can enhance their skills in generative AI. Competency partners like Shellkode also play a crucial role in developing various generative AI services to meet the diverse needs of customers.

“Our approach to responsible AI involves defining, measuring, and mitigating risks at every stage, from model training to deployment,” explained Bala. “We want to make sure that AI technology is used ethically and responsibly.”

Productivity Enhancement with AWS AI

A recent EY report shows that generative AI could potentially boost India’s GDP by an estimated $359 to $438 billion by 2030.

Organisations across India are recognising substantial gains in productivity and improved customer experiences by integrating generative AI into their products and solutions.

Bala noted, “As a result, boardrooms here (India) are increasingly asking, ‘What is the business impact in terms of productivity, customer experience, or time to market?’”

For example, Amazon CodeWhisperer, AWS’s AI coding companion now part of Amazon Q, enables developers to work more productively by providing AI-powered code suggestions in real time across 15 programming languages. In some cases, developers were 27% more likely to complete tasks successfully and did so an average of 57% faster.

Bala emphasised the importance of understanding business problems and leveraging data as the biggest differentiator.

“One thing that companies should really understand is their business problems. The data that they have got is their biggest differentiator, and they should learn how to apply generative AI on their data to solve business problems,” he said.

Amazon Bedrock Now in India

At the summit, AWS announced that Amazon Bedrock, a fully managed generative AI service, is now generally available in the AWS Asia Pacific (Mumbai) Region.

Bedrock provides a choice of top-notch models, from Amazon’s own first-party Titan family to third-party foundation models from AI-focused companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI, among others.

Bala explained that the launch in India enhances generative AI capabilities by improving processing and response times due to its proximity to users. It also gives customers more flexibility and choice of models without getting bogged down in the complexities of deployment and inference.

Initially launched globally in select regions in 2023, Bedrock’s expansion to Mumbai enables customers, including those in the public sector and regulated industries, to innovate with generative AI while maintaining control over application execution and data storage.

“Plus, we have several customers in regulated industries, such as life insurance, who require services to be available within India for regulatory compliance. By making these services available in the India region, we ensure that these customers can leverage Bedrock while adhering to local regulations,” he added.

However, adopting generative AI comes with several challenges. AWS has approached these by first helping businesses define their objectives.

Instead of offering generative AI as a one-size-fits-all solution, it has worked backwards from customer needs, determining whether traditional AI or generative AI would best solve their problems.

Future in India

According to Bala, India’s teams play a crucial role in AWS’s global operations. The expertise of Indian engineers and developers is integral to services like Amazon Bedrock, driving the company’s innovation, both in the country and globally.

Looking ahead, AWS’s vision is clear and focused. It aims to continuously innovate in generative AI, introducing new models and variants to make the technology more accessible and easier to use.


Anthropic Unveils Strategies For Testing And Mitigating Elections-Related Risks

Generative artificial intelligence (GenAI) has emerged as a transformative force in various sectors, including finance, IT, and healthcare. While the benefits of GenAI are undeniable, its application in the realm of elections poses significant risks and challenges. These include the threat of spreading misinformation through AI deepfakes and of creating highly personalized political advertisements for microtargeting and manipulation.

The AI models are only as good as the data they are trained on, and if data contains bias, it can have an unintended impact on the democratic process.

Anthropic, one of the leading AI safety and research companies, has shared the work it has done since last summer to test its AI models for election-related risks. The company has developed in-depth expert testing (“Policy Vulnerability Testing”) and large-scale automated evaluations to identify and mitigate potential risks.

The PVT method is designed to evaluate how Anthropic’s AI models respond to election-related queries. It does this by rigorously testing the models for two potential issues: first, where a model gives outdated, inaccurate, or harmful information in response to well-intended questions; and second, where the models are used in ways that violate Anthropic’s usage policy.

As part of the PVT, Anthropic focuses on selected areas and potential misuse applications, and with the assistance of subject matter experts, Anthropic constructs and tests various types of prompts to monitor how the AI model responds.

For this testing, Anthropic has partnered with some of the leading researchers and experts in this field including Isabelle Frances-Wright, Director of Technology and Society at the Institute for Strategic Dialogue.

The outputs from the PVT are documented and compared with Anthropic usage policy and industry benchmarks using similar models. The results are reviewed with the partners to identify gaps in policies and safety systems and to determine the best solutions for mitigating the risks. As an iterative testing method, PVT is expected to only get better with each round of testing.

Anthropic shared a case study in which it used the PVT method to test its models for accuracy based on questions about the election administration in South Africa. The method was successful in identifying 10 remediations to mitigate the risk of providing incorrect, outdated, or inappropriate information in response to elections-related queries. The remediations included “increasing the length of model responses to provide appropriate context and nuance for sensitive questions” and “not providing personal opinions on controversial political topics”.

Anthropic admits that while PVT offers invaluable qualitative insights, it is time-consuming and resource-intensive, making it challenging to scale. This limits the breadth of issues and behavior that can be tested effectively. To overcome these challenges, Anthropic also included automated evaluations for testing AI behavior across a broader range of scenarios.

Complementing PVT with automated evaluations enables assessment of model performance across a more comprehensive range of scenarios. It also allows for a more consistent process and set of questions across models.

Anthropic used automated testing to review random samples of questions related to EU election administration and found that 89% of the model-generated questions were relevant extensions to the PVT results.
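A simplified sketch of what such an automated evaluation might look like is below. The rubric checks and sample responses are hypothetical illustrations, not Anthropic’s actual criteria or tooling; the idea is simply to score many model responses against machine-checkable rules.

```python
# A minimal sketch of an automated evaluation harness in the spirit of the
# approach described above. The rubric checks and sample responses are
# hypothetical illustrations, not Anthropic's actual criteria.

def mentions_cutoff(response: str) -> bool:
    """Check whether the response discloses a knowledge cutoff."""
    text = response.lower()
    return "knowledge cutoff" in text or "training data" in text

def refers_to_authority(response: str) -> bool:
    """Check whether the response points to an authoritative source."""
    keywords = ("election commission", "official", "electoral authority")
    return any(k in response.lower() for k in keywords)

def score_responses(responses):
    """Return the fraction of responses passing each rubric check."""
    n = len(responses)
    return {
        "cutoff_rate": sum(mentions_cutoff(r) for r in responses) / n,
        "authority_rate": sum(refers_to_authority(r) for r in responses) / n,
    }

# Hypothetical model outputs to two election-related queries.
sample = [
    "My training data has a knowledge cutoff; please check the official "
    "election commission site for current results.",
    "The candidate leading the polls is X.",
]
print(score_responses(sample))
```

Because the checks are cheap to run, a harness like this can cover far more question variants than expert review alone, which is the scaling advantage the automated evaluations provide.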

Combining PVT and automated evaluations forms the core of Anthropic’s risk mitigation strategies. The insights generated by these methods enabled Anthropic to refine its policies, fine-tune its models, update Claude’s system prompt, and enhance automated enforcement tools.

Additionally, Anthropic models were enhanced to now automatically detect and redirect election-related queries to authoritative sources. This includes time-sensitive questions about elections that the AI models might not be capable of answering.


After the implementation of changes highlighted by PVT and automated testing, Anthropic used the same testing protocols to measure whether its interventions were successful.

The testing re-run revealed a 47.2% improvement in referencing the model’s knowledge cutoff date, one of Anthropic’s top-priority mitigations. According to Anthropic, fine-tuning its models also led to a 10.4% improvement in how often users were referred to an authoritative source for the appropriate questions.

While it may be impossible to completely mitigate the threats posed by AI technology to the election cycle, Anthropic has made significant strides in responsible AI use. Anthropic’s multifaceted approach to testing and mitigating AI risks has ensured that the potential misuse of its AI models during elections is minimized.

Related Items

Anthropic Breaks Open the Black Box

Amazon Invests Another $2.75 Billion Into Anthropic

Anthropic Launches Tool Use, Making It Easier To Create Custom AI Assistants

Why are Apple’s AI features not coming to lower-end iPhones? Here’s my guess as an IT expert


During WWDC 2024, Apple unveiled "Apple Intelligence," which incorporates advanced AI capabilities throughout its ecosystem. However, these features are only available on high-end devices such as the iPhone 15 Pro, iPad Pro with M-series chips, and Macs running on Apple Silicon.

Also: Apple staged the AI comeback we've been hoping for — but here's where it still needs work

Why didn't Apple roll these features out to the entry-level iPhone 15 and earlier models? Although there may be other reasons why the company chose not to do so, the decision is almost certainly influenced by the substantial costs and infrastructure challenges involved in large-scale AI implementation.

The cost of GPU processing

Advanced AI features require substantial computational power, typically provided by high-performance GPUs. For instance, NVIDIA's MGX platform with the GH200 Grace Hopper Superchip, designed for AI training, inference, 5G, and HPC, costs around $65,000 per server. Deploying these servers regionally to support lower-end devices would be prohibitively expensive. Apple would easily need thousands of these units to support its entire user base, resulting in astronomical costs likely passed on to consumers through service fees.
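As a rough illustration of the scale involved, a back-of-envelope calculation might look like the following. The ~$65,000 unit price comes from the figure cited above; the fleet size of 10,000 is a hypothetical stand-in for "thousands of units".

```python
# Back-of-envelope hardware cost estimate. The unit price is the ~$65,000
# figure cited in the text; the unit count is a hypothetical assumption.
unit_cost = 65_000        # approximate cost of one GH200-class server, USD
units_needed = 10_000     # hypothetical fleet size ("thousands of units")
capex = unit_cost * units_needed
print(f"${capex:,}")      # $650,000,000 — before power, cooling, and networking
```

Even under these conservative assumptions, the hardware alone runs to hundreds of millions of dollars, which is before operating costs like power, cooling, and bandwidth.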

Also: Apple partners with OpenAI to bring ChatGPT to iOS, iPadOS, and MacOS

Even major AI service providers such as OpenAI, Microsoft, and Google encounter challenges in offering dependable and quick public access to LLMs and generative AI models without downtime or overcommitting resources. The shortage and cost of GPU-enabled servers make these issues worse. To maintain the rapid response times expected by its customers, Apple will need to invest substantially in servers, data centers, and edge infrastructure — an infrastructure level it likely does not currently possess.

Apple's approach to Private Cloud Compute (PCC)

For the initial rollout of Apple Intelligence, the company has chosen a hybrid approach to balance cost and performance, combining on-device processing with Private Cloud Compute (PCC). On-device processing utilizes the A17 Pro chip in the iPhone 15 Pro line and the M-series chips in iPads and Macs to enhance security and privacy. For more demanding tasks, PCC allows cloud operations while maintaining user privacy. PCC is designed with custom Apple silicon and a robust operating system to ensure personal data security and prevent unauthorized access.

Also: Here's how Apple's keeping your cloud-processed AI data safe (and why it matters)

Apple is currently focused on rolling out its Generative AI services to high-end devices as part of the initial phase of Apple Intelligence deployment. This allows Apple to enhance its AI capabilities and infrastructure before expanding to a wider range of devices. To bring Apple Intelligence to the rest of its ecosystem, the company will likely deploy AI-accelerated server appliances at the edge, enabling less capable devices to benefit from advanced AI features. However, this infrastructure is not yet ready for large-scale deployment, as Apple's shift towards AI development is still recent.

The challenges of edge computing

Edge computing, which involves processing data closer to where it is generated rather than relying solely on centralized data centers, could significantly enhance performance and reduce latency. However, deploying edge computing infrastructure is complex and costly, requiring robust hardware and software solutions to ensure seamless integration and security. Apple is known for its meticulous approach to hardware and software development, and the company is likely still testing and refining its edge computing solutions before rolling them out at scale.

Also: Make room for RAG: How Gen AI's balance of power is shifting

While NVIDIA is a major player in the GPU server space, others include traditional x86 Intel-based and Arm-based server providers like Qualcomm and Ampere. These servers can also use NVIDIA GPUs, but Apple likely wants to control the integration with its operating system and silicon to deploy AI computing. Additionally, the supply chain from NVIDIA or any other HPC server vendor is likely insufficient to meet Apple's large-scale deployment requirements.

As reported by The Register, Apple is developing its own AI servers, which are expected to be more cost-effective and better integrated with its ecosystem. These servers are currently being tested in data centers for foundation model use, and a broader rollout is anticipated in 2025. This phased approach ensures Apple can maintain high privacy, security, and user experience standards while gradually expanding its AI capabilities across its device lineup.

Broader implications for IoT and other devices

Apple's decision to limit Apple Intelligence to high-end models is driven by the significant cost and infrastructure challenges associated with deploying AI at scale, allowing the company to ensure a smooth and secure user experience while laying the groundwork for future expansions.

The need for AI-accelerated servers isn't just about older phones and lower-end devices. Apple's IoT products, like the Apple Watch, Apple TV, and HomePod, which lack the computational power for on-device AI, would also benefit from such infrastructure. These devices are unlikely to handle on-device AI computation anytime soon, making cloud and edge solutions even more critical.

Also: Here's every iPhone model that will support Apple's new AI features (for now)

As Apple introduces Apple Intelligence, users with older or non-Pro models may feel left out. Clear communication from Apple regarding the phased rollout strategy and plans for broader deployment will be important in managing user expectations.

As Apple continues developing its AI infrastructure, including potential edge computing solutions, we can expect a broader rollout of Apple Intelligence in the coming years.


Meta AI Unveils Husky, a Unified, Open-Source Language Agent


Researchers at Meta, the Allen Institute for AI, and the University of Washington have proposed a new open-source language agent designed for complex, multi-step reasoning tasks, christened Husky.

Unlike existing models that focus on specific domains, the researchers claim that Husky operates over a unified action space. This means it can handle diverse challenges such as numerical, tabular, and knowledge-based reasoning, as opposed to specialised agents that focus on narrower tasks like coding.

Husky iterates between generating actions to solve tasks and executing these actions using expert models, constantly updating its solution state. This iterative approach has proven a key point of distinction, allowing Husky to outperform previous agents across the 14 datasets used for evaluation.
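The iterate-and-execute loop described above can be sketched schematically. This is not Husky’s actual implementation; the planner and tools below are toy stand-ins (a hand-written rule and a calculator) purely to illustrate the control flow of generating an action, executing it with a tool, and updating the solution state.

```python
# A schematic sketch of an iterate-and-execute agent loop. NOT Husky's
# actual implementation: the planner and tools are toy stand-ins.

def toy_planner(state):
    """Stand-in for the action generator: decide the next (tool, input) step."""
    if "sum" not in state:
        return ("calculator", (state["a"], state["b"]))
    return ("finish", state["sum"])

TOOLS = {
    # Expert models would go here; a calculator stands in for a math tool.
    "calculator": lambda args: {"sum": args[0] + args[1]},
}

def run_agent(task, max_steps=5):
    """Iterate between generating an action and executing it, updating state."""
    state = dict(task)
    for _ in range(max_steps):
        tool, payload = toy_planner(state)
        if tool == "finish":
            return payload
        state.update(TOOLS[tool](payload))  # execute the action, update state
    raise RuntimeError("step budget exhausted")

print(run_agent({"a": 17, "b": 25}))  # 42
```

In a real agent the planner would be a language model and the tools would include code executors, retrievers, and other expert models, but the loop structure is the same.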

Read the full paper here.

Focus on Mixed-Tool Reasoning

One of Husky’s key innovations is its capability to manage mixed-tool reasoning. It excels in tasks that require retrieving missing knowledge and performing numerical calculations, achieving performance on par with, or exceeding, state-of-the-art models like GPT-4.

The researchers have also introduced HuskyQA, an evaluation set specifically designed to stress test language agents on mixed-tool reasoning tasks, particularly to perform numerical reasoning and retrieve missing knowledge.

Language agents perform complex tasks by using tools to execute each step precisely. However, most existing agents are based on proprietary models or designed to target specific tasks, such as mathematics or multi-hop question answering.

“Our experiments show that Husky outperforms prior language agents across 14 evaluation datasets,” the researchers stated.

While it is true that AI agents have gained significant popularity over the last couple of years, the introduction of an agent capable of reasoning over a number of complex tasks means that agent capabilities are quickly expanding.


Key trends in intelligent automation: From AI-augmented to cognitive


Introduction

Intelligent automation is advancing rapidly by integrating AI-augmented, autonomous, autonomic, and cognitive capabilities into automation systems. Each capability represents a different level of sophistication in how Artificial Intelligence (AI) interacts with human activity and the surrounding environment. Intelligent automation has evolved from basic rule-based systems to incorporate sophisticated machine-learning algorithms. The first capability discussed in this article, AI-augmented automation, augments automation systems through a ‘partnership model’ in which humans and AI work together to improve the performance of automation systems. Moving beyond augmentation, autonomous capabilities allow systems to operate independently and adapt to new situations. Further advancement comes with autonomic capabilities, representing sophisticated forms of automation where systems are capable of self-management and dynamic adaptation without external intervention. Finally, cognitive automation enhances this landscape by incorporating advanced cognitive abilities into automation systems.

This article uses illustrative examples to clarify AI’s functionalities and role within each type of these capabilities, establishing a foundation for understanding them. It paves the way for further exploration of this continuously evolving landscape and its transformative impact on the future. The scope of this article covers intelligent automation systems that automate processes, decisions, tasks, and actions across various domains, such as business, IT, and industrial automation.

Basic Concepts and Definitions

Figure 1 depicts a Taxonomy of Systems that provides context for the domain of intelligent automation. It categorizes automation systems broadly as computerized systems and then narrows down to specific types of automation systems. It distinguishes between traditional and intelligent automation systems within the automation systems’ category. Then, it classifies intelligent automation systems, the focus of this article, into four categories: AI-augmented, autonomous, autonomic, and cognitive. These categories represent different levels of sophistication in how AI interacts with human activity and the surrounding environment.

It is worth noting that the boundaries between these categories can be conceptually blurry. This reflects the ongoing development of intelligent automation and the continuous advancement of these systems. For example, certain AI-augmented systems may exhibit autonomous characteristics under specific circumstances. Similarly, some autonomous systems may integrate AI functionalities that edge them towards autonomic or cognitive behaviours.

Furthermore, the practical application of these categories in real-world systems often leads to a blending of capabilities. Take self-driving cars as an example. They display autonomous features, such as independent navigation, and augmented ones, like providing driver assistance in specific scenarios. This illustrates how real-world systems can embody characteristics from various categories, further highlighting the fluidity of the boundaries in intelligent automation.

Figure 1- Taxonomy of Systems Providing Context for Intelligent Automation


Definitions of Taxonomy Nodes:

Computerized Systems: Systems that utilize computing technology to function. This extends beyond dedicated computer systems to include systems where computers play a crucial role in their functionality and operations.

Automation Systems: In the context of this article, automation systems refer to physical or software Computerized Systems that automate processes, decisions, tasks, and actions across various domains, including business, IT, and industrial automation. These systems incorporate computing technologies like hardware, software, and firmware. They may also incorporate other technologies, like sensors and actuators, which are crucial for certain automation functions.

Traditional Automation Systems: Automation Systems that do not utilize artificial intelligence. They have fixed deterministic behavior.

Intelligent Automation Systems: Automation Systems that leverage AI to automate processes, decisions, tasks, and actions.

Augmented Automation Systems: Automation Systems that leverage AI and human intelligence to automate processes, decisions, tasks, and actions. However, they often require human oversight or intervention in certain situations.

Autonomous Systems: Automation Systems that leverage AI to operate independently, and adapt to new situations. They function algorithmically without requiring human intervention.

Autonomic Systems: Automation Systems that leverage AI to operate independently. Moreover, they are self-managing, capable of analyzing their environment and optimizing their behavior to adapt to changing conditions. These systems would dynamically modify their algorithms independently, without the need for external software updates.

Cognitive Automation Systems: Automation Systems that employ AI capabilities, such as natural language processing and machine learning, to automate tasks that traditionally require human cognitive skills. These systems enable machines to perceive, learn, reason, and make decisions, mimicking human intelligence.

Other Computerized Systems: These are systems that aid human activities, excluding automation. They perform functions such as data processing and information management.

Examples of Different Categories of Automation Systems

Building on the concepts introduced in the previous section, this section uses illustrative examples to showcase the key features of intelligent automation systems, the focus of this article. It further details specific AI techniques that could be employed within each system and explains their roles.

It should be noted that the techniques listed here are illustrative examples of potential AI techniques. The techniques chosen and how they are implemented can vary depending on the system’s design goals, data characteristics, and computational resources. Potential limitations and issues associated with these techniques should be carefully considered when selecting and implementing AI for real-world applications.

Traditional Automation Systems

Basic Supervisory Control and Data Acquisition (SCADA) system for Car Wash Tunnels: This system automates the operation of a car wash tunnel by controlling the motion of washing equipment along a track or conveyor system while vehicles remain stationary. It follows a set sequence of steps and speeds, ensuring consistent and efficient cleaning while minimizing the need for human intervention.

This system relies on pre-programmed instructions to automate repetitive predefined tasks. It does not utilize AI and has fixed deterministic behaviors.
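The fixed, deterministic nature of such a controller can be sketched as a pre-programmed sequence. The stage names and durations below are illustrative, not taken from any real SCADA configuration.

```python
# A minimal sketch of the fixed, deterministic control sequence a traditional
# car wash controller might follow. Stage names and durations are illustrative.

WASH_SEQUENCE = [
    ("pre-rinse", 30),  # (stage, seconds)
    ("soap", 45),
    ("brush", 60),
    ("rinse", 30),
    ("dry", 90),
]

def run_cycle(sequence):
    """Execute each stage in order; no sensing, learning, or adaptation."""
    log = []
    for stage, seconds in sequence:
        log.append(f"{stage}:{seconds}s")  # a real PLC would drive actuators here
    return log

print(run_cycle(WASH_SEQUENCE))
```

The key point is that every run produces exactly the same behaviour: there is no AI, no feedback from the environment, and no adaptation, which is what separates traditional automation from the intelligent categories that follow.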

Intelligent Automation Systems

AI-Augmented Automation System

Data Fabric Platform: This system utilizes AI and other technologies to streamline data fabric design and implementation by enabling AI-augmented data management, integration, and sharing across diverse data sources, addressing different aspects of an application’s data needs.

The data fabric platform described in this example utilizes AI techniques to assist and augment human data management tasks. While AI can automate specific data management, integration, and sharing tasks, human intervention remains essential in several situations. This characteristic emphasizes the AI-augmentation nature of this system, where AI augments human capabilities without taking over the entire process.

Potential AI Techniques

K-Means Clustering: K-Means clustering groups similar data points, facilitating data analysis and pattern recognition.

Isolation Forest: Isolation Forest detects anomalies in data, ensuring data quality and integrity.

Random Forest: Random Forest is used for classification and regression tasks, enhancing predictive analytics within the data fabric.

Regression, Collaborative Filtering: These techniques are used for predictive modelling and recommendation systems, improving data-driven decision-making.

Graph Neural Networks (GNNs): GNNs are used for analysing and learning from graph-structured data, enhancing the understanding of complex relationships in the data.

Transformer Models (e.g., GPT or BERT): Transformer models are used for natural language processing tasks, improving the ability to understand and process textual data.

Knowledge Graphs & Active Metadata (foundation for AI-ready data management): Improve data provisioning by providing context and facilitating data discovery (Data Management Techniques).
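As a concrete illustration of the first technique in the list above, here is a minimal from-scratch K-Means implementation (Lloyd’s algorithm) on toy 2-D data. A real data fabric platform would use a tuned library implementation on far richer features; this sketch only shows the core alternate-assign-and-update loop.

```python
# A from-scratch sketch of K-Means clustering (Lloyd's algorithm) on toy
# 2-D data. Real platforms would use a tuned library implementation.
import math

def kmeans(points, k, iters=20):
    """Alternate between assigning points to centroids and updating centroids."""
    centroids = points[:k]  # deterministic init for illustration: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign each point to its nearest centroid.
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Recompute each centroid as the mean of its assigned points.
        centroids = [
            tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious groups of points; K-Means should recover them.
data = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.0), (8.2, 7.9), (7.8, 8.1)]
centroids, clusters = kmeans(data, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

In a data fabric context, the "points" would be feature vectors describing datasets or records, and the resulting clusters support the pattern-recognition and data-profiling tasks described above.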

Autonomous System

Autonomous Vehicle Cybernetic System (AVCS): This system is the intelligent core of an autonomous vehicle. It integrates software modules utilizing AI and machine learning for tasks like perception (interpreting sensor data), planning safe trajectories, and making real-time decisions. Hardware components within the AVCS, including sensors, processors, and actuators, work with the software to enable the vehicle to perceive its environment, plan its route, and execute manoeuvres for self-driving operation.

This AVCS leverages AI algorithms to process real-time sensor data (cameras, radar, LiDAR, ultrasonic sensors, GPS) for environmental perception. That enables the vehicle to independently perform the entire driving task, adapting to dynamic situations without human intervention.

Potential AI Techniques

Convolutional Neural Networks (CNNs): CNNs are used for image recognition and object detection, allowing the vehicle to perceive its surroundings.

YOLO (You Only Look Once): YOLO is a real-time object detection algorithm that helps the vehicle detect and classify objects on the road.

Simultaneous Localization and Mapping (SLAM): SLAM is used to map the environment and determine the vehicle’s location.

Model Predictive Control (MPC): MPC helps plan the vehicle’s path and control its movements.

Finite State Machines (FSMs): FSMs manage the vehicle’s behaviour based on different driving scenarios.

Kalman Filters: Kalman filters are used for sensor fusion and estimating the vehicle’s state.
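To make one of these techniques concrete, below is a minimal one-dimensional Kalman filter that smooths noisy position readings under a simple random-walk model. An actual AVCS would run multi-dimensional filters over fused streams from cameras, radar, LiDAR, and GPS; the noise parameters and readings here are illustrative.

```python
# A minimal 1-D Kalman filter sketch for the sensor-fusion role described
# above: smoothing noisy position readings under a random-walk model.

def kalman_1d(measurements, q=0.01, r=1.0):
    """Scalar Kalman filter: predict, then correct toward each measurement z.

    q is the process noise (how much the state drifts between steps);
    r is the measurement noise (how much we distrust each reading).
    """
    x, p = measurements[0], 1.0  # initial state estimate and variance
    estimates = [x]
    for z in measurements[1:]:
        p = p + q                # predict: uncertainty grows over time
        k = p / (p + r)          # Kalman gain: how much to trust the reading
        x = x + k * (z - x)      # correct the estimate toward the measurement
        p = (1 - k) * p          # uncertainty shrinks after the correction
        estimates.append(x)
    return estimates

noisy = [0.0, 1.2, 0.8, 1.1, 0.9, 1.05]  # readings around a true value of ~1.0
est = kalman_1d(noisy)
print(round(est[-1], 2))
```

The filter's output converges toward the underlying value while damping the measurement jitter, which is exactly the state-estimation role Kalman filters play in an autonomous vehicle stack.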

Autonomic System

Distributed Routing and Obstacle Management System (DROMS) – This system operates as a decentralized autonomic system. By continuously analysing distributed environmental data (e.g., congestion, unexpected obstacles), the network of delivery robots collaboratively adapts delivery routes. This distributed decision-making optimizes efficiency and ensures uninterrupted service.

This DROMS leverages AI for self-management and real-time collaboration among delivery robots. It continuously analyses distributed environmental data and independently adapts delivery routes for each robot. DROMS showcases self-management capabilities by continuously adapting its behaviour to the environment without human intervention. However, pre-programmed algorithms likely define its core functionalities. While it can optimize routes and adapt to dynamic situations within the capabilities of these algorithms, it may need external intervention to change its core programming fundamentally.

Potential AI Techniques

YOLOv5: YOLOv5 can be used with a communication protocol that allows robots to share obstacle information, helping the system detect and avoid obstacles in real time.

Multi-Agent Deep Deterministic Policy Gradient: This technique allows the robots to learn and optimize their routing strategies through collaboration.

Soft Actor-Critic (SAC): SAC helps in optimizing the actions taken by the robots in uncertain environments.

Variational Autoencoders (VAEs): VAEs are used for anomaly detection and understanding the surrounding environment.
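The distributed re-routing idea can be sketched with a toy example in which robots publish obstacles to a shared map and each robot re-plans its route with a breadth-first shortest-path search. The grid, obstacle set, and planner are illustrative stand-ins for the learning-based policies listed above, not a real DROMS component.

```python
# A toy sketch of shared-map re-routing: robots publish obstacles, and each
# robot re-plans with a BFS shortest-path search on a small grid.
from collections import deque

def plan(start, goal, obstacles, size=5):
    """BFS shortest path on a size x size grid avoiding shared obstacles."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        (x, y), path = queue.popleft()
        if (x, y) == goal:
            return path
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if 0 <= nx < size and 0 <= ny < size and nxt not in obstacles and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None  # no route around the reported obstacles

# A wall of obstacles reported by other robots, blocking the direct route.
shared_obstacles = {(1, 0), (1, 1), (1, 2), (1, 3)}
route = plan((0, 0), (4, 0), shared_obstacles)
print(len(route) - 1)  # 12 moves after detouring around the wall
```

When a robot reports a new obstacle, the shared set grows and every affected robot simply re-runs its planner, which is the collaborative-adaptation loop described above in miniature.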

Cognitive Automation Systems

Cognitive Fraud Detection – This system employs AI to analyse patterns in financial transactions and detect fraud.

This Cognitive Fraud Detection system leverages AI algorithms to analyse large volumes of financial data. This analysis mimics the cognitive skills traditionally employed by human fraud analysts in pattern recognition and anomaly detection. By identifying suspicious transactions that might indicate fraudulent activity, the system automates tasks that previously required human expertise, improving overall efficiency and reducing the burden on fraud analysts.

Potential AI Techniques:

Deep Neural Networks (DNNs): DNNs detect complex patterns in transaction data to identify potential fraud. Combined with other techniques that offer clear explanations, this creates a more understandable cognitive fraud detection solution.

Anomaly Detection Algorithms: These algorithms identify unusual patterns in transaction data that may indicate fraudulent activity.

Natural Language Processing (NLP): NLP is used to analyse text data, such as transaction descriptions, to identify suspicious behaviour.

Clustering Algorithms: Clustering algorithms group similar transactions together to identify outliers and patterns indicative of fraud.
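As a minimal illustration of the anomaly-detection idea above, the sketch below flags transactions whose amount deviates sharply from typical spend, using a robust modified z-score based on the median and median absolute deviation (MAD). Production systems combine many features and learned models rather than a single rule; the amounts here are invented.

```python
# A minimal anomaly-detection sketch: flag transaction amounts far from the
# account's typical spend using a robust modified z-score (median and MAD).
import statistics

def flag_anomalies(amounts, threshold=3.5):
    """Return indices whose modified z-score exceeds `threshold`."""
    med = statistics.median(amounts)
    mad = statistics.median(abs(a - med) for a in amounts)
    if mad == 0:
        return []  # no spread: nothing can be flagged by this rule
    return [i for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

history = [42.0, 38.5, 55.0, 47.2, 51.3, 39.9, 44.1, 2500.0]  # one suspicious spike
print(flag_anomalies(history))  # [7]
```

The median-based score is used instead of a plain mean-based z-score because a single large outlier inflates the mean and standard deviation enough to hide itself; the robust version keeps the spike clearly visible.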

Conclusion

Intelligent automation includes various categories of systems, each with specific capabilities and sophistication levels. Augmented systems augment human activities, autonomous systems operate independently, autonomic systems manage themselves dynamically, and cognitive systems mimic human cognitive functions. The selection of the most suitable intelligent automation approach for a solution depends on several factors, such as the specific needs of the application (use cases), the maturity of the relevant technologies, and cost considerations. Understanding the distinctions and overlaps between these categories is crucial for navigating the complexities of intelligent automation. As AI advances, the lines between these categories may blur further. While predicting a single dominant intelligent automation category is difficult, the future likely holds a convergence of these categories. This convergence will likely be driven by the increasing adoption of hybrid approaches that combine functionalities from various categories to address the specific data needs of different applications.

TCS Launches New IoT Engineering Lab in Ohio


Tata Consultancy Services (TCS) has announced the launch of the Bringing Life to Things™ Lab in Cincinnati, Ohio.

The lab is designed to support the rapid prototyping, experimentation, and large-scale implementation of AI, GenAI, and IoT engineering solutions, enabling TCS to assist clients in bringing innovative solutions to life faster and more efficiently.

Spanning 3,000 square feet, the lab will advance the deployment of TCS’ comprehensive suite of IoT solutions, including TCS Clever Energy™, TCS Digital Manufacturing Platform™ (DMP) and TCS Digifleet™, among others.

These solutions cater to various industries including health care and life sciences, manufacturing, energy and resources, consumer packaged goods, and more. The lab will also help businesses collaborate and co-innovate, integrating physical assets, partner technologies, and customer challenges to create new offerings and solutions.

The lab also offers TCS’ innovative solutions, including the TCS Neural Manufacturing solution, which provides autonomous and intelligent capabilities for factories, and the TCS connected healthcare platform, providing solutions for personalized medicine.

“On the demand side, structural shifts such as energy transition, supply chain relocation and AI are requiring significant new production capacity in the United States that is connected, intelligent and autonomous by design. On the supply side, Intelligent Edge, powered by advances in connectivity, sensor and AI technology, is reinventing customer experience, personalized products and connected manufacturing.

“TCS’ investment in the Bringing Life to Things Lab in Ohio will help our clients bridge the traditional divide between operational and digital technology by rapidly turning their ideas into minimum viable products that reimagine their value chain at scale. With its strategic location in Cincinnati, home of TCS’ largest American delivery center, the lab is well positioned to tap into the area’s tech talent to help our customers across North America,” Amit Bajaj, President-North America, TCS, said.


Apple staged the AI comeback we’ve been hoping for — but here’s where it still needs work


During WWDC 2024, Apple introduced the Apple Intelligence platform, which brings generative artificial intelligence (AI) and machine learning to the forefront. This platform utilizes large language and generative models to handle text, images, and in-app actions.

This initiative integrates advanced AI capabilities across the Apple ecosystem to transform device interaction. However, current iPhone and iPad users might need to upgrade their devices to take full advantage of these benefits.


In a previous article, I recommended several key steps for Apple to stay competitive in the AI race. Let's see how Apple's announcements measure up to these recommendations and where there is room for improvement.

What Apple Intelligence will bring to the company's operating system platforms

AI on the device and in the cloud

Apple Intelligence brings powerful generative models to iPhone, iPad, and Mac. On-device capabilities require an A17 Pro chip, limiting them on iPhone to the 15 Pro and Pro Max; processing requests locally keeps personal data on the device for enhanced security and privacy. Similarly, only iPads with M-series chips (like the latest iPad Air and iPad Pro) and Macs running Apple Silicon will be compatible. Many users with older devices or non-Pro models will miss these advanced features.

For more demanding tasks, Apple introduced Private Cloud Compute (PCC), a groundbreaking cloud intelligence system designed for private AI processing. PCC extends the industry-leading security and privacy of Apple devices into the cloud, ensuring that personal user data sent to PCC isn't accessible to anyone other than the user — not even Apple. Built with custom Apple Silicon and a hardened operating system designed for privacy, PCC represents a generational leap in cloud AI compute security.

In terms of AI infrastructure, Apple also introduced its Foundation Models, including a ~3 billion parameter on-device language model and a larger server-based model running on Apple Silicon servers within the company's data centers. These models are fine-tuned for specialized tasks and optimized for speed and efficiency.


Room for improvement: Apple fell short in AI infrastructure leadership by not announcing AI-accelerated server appliances at the edge, which would allow less capable devices, such as the base iPhone 15 and earlier iOS 18-supported models, to use Apple Intelligence's more advanced features. The hybrid model of on-device processing plus PCC is a step in the right direction, but Apple made no mention of AI-accelerated edge network devices that could enhance performance and reduce latency. Apple is typically not transparent about how it deploys resources in its data centers, so it may plan to roll out such appliances at the edge without disclosing specifics. And while the short list of Responsible AI Principles the company has documented is a good start, an AI ethics disclosure statement along the lines of what Adobe has published would further bolster trust and transparency.

Embracing third-party AI providers

Apple has dipped its toes into ChatGPT integration, indicating a willingness to integrate third-party services and partner with multiple AI providers. During the keynote, Apple said it would allow additional third-party large language models (LLMs) beyond OpenAI's ChatGPT (free, Plus, and presumably Enterprise), but did not name them. Potential candidates include Microsoft Copilot, Google Gemini, Meta Llama 3, Amazon Titan, and models from Hugging Face, among many others.


Room for improvement: While Apple's intention to be LLM-agnostic is a positive sign for the company's AI strategy, I had hoped for a broader embrace of third-party platforms, particularly health, finance, and education, with AI integration. However, this shift will have to come with developers embracing the new SiriKit, App Intents, Core ML, Create ML, and other APIs. Deeper integration with specialized AI providers could significantly enhance Apple Intelligence's functionality and versatility.

Smart notifications and writing tools

Smart notifications in Apple's operating systems will leverage on-device LLMs to sift through the noise and ensure that only the most important alerts make it through. This is part of the new Reduce Interruptions Focus, which shows users key details for each notification. System-wide writing tools can write, proofread, and summarize text for users, from short messages to long blog posts, with the Rewrite feature providing multiple versions of text based on the intended audience.


Room for improvement: Building on the Reduce Interruptions Focus, further development in proactive assistance features that anticipate user needs based on past behavior and context would be beneficial.

AI image generation and Genmoji

Apple has opened up a world of creative possibilities by integrating the Image Playground API into all apps. Users can create AI-generated images in three styles: Sketch, Animation, and Realism. Imagine creating and sharing these images directly within Messages or Pages — it's a game-changer. In Notes, a new Image Wand tool can generate images based on the current page content. Genmoji allows users to create custom emojis, adding a personalized touch to communications.

Room for improvement: Providing more granular controls and customization options for the generated images and Genmojis, such as fine-tuning styles and attributes, could cater to more specific user preferences. Additionally, implementing features that suggest image enhancements or emoji creations based on user activity and context could further streamline the creative process.

Enhanced Siri and task automation

Siri, the voice assistant we've come to know and tolerate, is finally getting a much-needed upgrade. With advanced natural language processing (NLP), Siri can understand users even if they stutter and maintain conversational context, making interactions more seamless and intuitive. You can now type requests to Siri, a feature bound to be a hit in noisy environments. Siri's new look, with a light wrapping around the screen edges when tapped, adds a modern touch.

Siri's improved contextual awareness allows it to handle tasks like finding specific photos, playing podcasts, and retrieving shared files based on user commands. The assistant can pull driver's license information from a photo and input it into a form. In Photos, the AI can use NLP to search for specific photos or video clips and remove distracting objects with the new Clean Up tool.


The new Reduce Interruptions feature ensures that only the most important notifications get through based on your activity. On the iPad, handwriting optimization (Smart Script) and mathematical interpretation capabilities make it easier to write equations with the Apple Pencil and have them solved by the Calculator app. In Notes, the Image Wand transforms rough sketches into polished images, and you can record and transcribe audio with text summaries generated by Apple Intelligence. A clean-up tool removes unwanted objects in Photos, and Search in Videos helps find specific snippets.

Apple Intelligence also performs actions within apps on behalf of the user. It can open Photos and show images of specific groups based on a request. In Mail, priority messages are highlighted with summaries for quick insight. Notes users can record, transcribe, and summarize audio, creating summary transcripts of calls with automatic notifications to participants.

Room for improvement: While Apple has made significant progress, future updates could further enhance Siri's capabilities, automate more complex tasks, and provide deeper personalization across the Apple ecosystem.

AI capabilities across Apple products

Lastly, enhancing AI capabilities across all Apple products, including Siri, Apple Music, Apple News, Health, Fitness+, TV, and HomeKit, was a major recommendation. While Apple's AI features are integrated across devices, the specific enhancements for services like Apple Music and HomeKit were limited, at least as addressed in the WWDC keynote.


Room for improvement: We also haven't heard anything about HomePod or Apple TV with Apple Intelligence, although neither of these products has the computational power to perform on-device generative AI. Similarly, there were no mentions of new AI capabilities in WatchOS. While these devices might be able to use some of the cloud capabilities of Apple Intelligence, this was not brought up in the keynote. Additionally, with its M2 chip, the Vision Pro is powerful enough to handle Apple Intelligence on-device features. Still, the keynote did not discuss what would be coming to that device specifically.

The developer story

At WWDC 2024, Apple is doubling down on empowering developers with the tools and APIs they need to unlock Apple Intelligence's full potential. An extensive lineup of developer sessions highlights the company's commitment to fostering a vibrant AI development ecosystem.

These sessions will offer deep dives into optimizing and implementing machine-learning models on iOS, iPadOS, and MacOS. The goal is to equip developers with the knowledge to harness Apple's advanced AI capabilities.

One of the standout features is, of course, the enhanced Siri. Developers will learn how to integrate their apps with SiriKit, using its improved NLP to create more seamless and intuitive user interactions. App Intents will also be a key focus, allowing developers to bring their app's core features directly to users through Siri and other system services.


With Apple Silicon leading the charge, sessions will offer guidance on optimizing machine learning and AI models specifically for these powerful chips, including deploying models with Core ML and supporting real-time ML inference on the CPU. Updates to Create ML will also be covered, focusing on training models more efficiently and effectively.

Another major highlight will be Apple's new writing tools, which can proofread, summarize, and rewrite text. Developers will be shown how to incorporate these tools into their apps, offering users advanced text manipulation features.

The creative potential of Genmoji will also be explored, with sessions on how to generate custom emojis to enhance user engagement and personalization.

Apple is pushing the boundaries of performance with sessions on accelerating machine-learning tasks using Metal, Apple's graphics framework. Developers will also discover new capabilities within Swift and the Vision framework, crucial for integrating advanced image recognition features.

Finally, the new Translation API will be unveiled. It will help developers build apps that seamlessly translate text and speech, making applications more inclusive and accessible.


By equipping developers with these resources, Apple is ensuring that the potential of Apple Intelligence can be fully realized across its ecosystem, driving innovation and enhancing user experiences.

Did Apple go far enough with AI improvements?

Despite the exciting announcements, there are still some gaps. Apple introduced new APIs and enhancements, and the upcoming developer sessions will provide the necessary tools, frameworks, and training. However, there was a missed opportunity for broader third-party integration, especially in key areas such as health and finance. After developers kick the tires on Apple Intelligence this fall, these integrations may arrive later, after the iOS 18 release.

While enhancements across Apple services like Apple Music, News, Health, Fitness+, and HomeKit were implied, they were not extensively covered. We expect these details to emerge with later iOS 18 betas.

Apple's WWDC 2024 announcements align with several key recommendations but fall short in broader third-party integration, proactive assistance, and ethical AI practices. However, the extensive developer sessions planned for the conference suggest that Apple is serious about equipping developers with the tools and knowledge they need to use these new AI capabilities.

Addressing the remaining gaps could enhance Apple's competitive position in the AI race, providing a more robust and user-centric AI ecosystem. By continuing to innovate and improve in these areas, Apple can set new benchmarks and lead the future of AI-driven technology.


‘Creativity in Indian Open Source will Flourish, Blooming a Spring of AI Innovations and Startups’


GitHub CEO Thomas Dohmke is optimistic that India will emerge as a leader in the age of AI. According to a GitHub report released last year, India already has a large developer base, and the number of developers in the country is set to surpass developers in the US by 2027.

Moreover, GitHub revealed that there are over 15.4 million developers in India building on GitHub, growing 33% year-over-year (YoY).

While speaking at the recently held GitHub Galaxy 2024 event in Bengaluru, Dohmke said, “India will not just be a global leader, but the global leader in the age of AI.

“Children and adults alike will learn to code in their native language, leading to a prolonged groundswell of developers.

“Creativity in Indian open source will flourish, blooming a spring of AI innovations and startups. And Indian businesses will carve a competitive advantage in the global market, as their developers build software with an accelerated speed of code.”

Dohmke added that India’s burgeoning developer community, combined with the newfound possibilities of AI, will not only accelerate digital transformation, but will drive immense human and economic progress for India.

“Developers are already 55% faster when using Copilot, meaning the software economy, valued globally in the trillions, is moving rapidly faster because of AI.

“In a way that no other country on Earth can claim, India is on the brink of a great convergence between the world’s largest population of developers and the newfound possibility of AI.

“If enabled, this great convergence will generate a consequential economic boom in India that could be felt around the world for generations to come.”

Today, GitHub Copilot has 1.8 million paid subscribers, and has been adopted by over 50,000 organisations globally.

In India, GitHub revealed that it continues to see the adoption of its Copilot-powered platform by companies in every industry, including Cognizant, MakeMyTrip, Paytm, Swiggy, Glance, and Air India, among others.

Copilot is also being leveraged by public sector customers such as the Government e-marketplace (GeM).

GitHub Partners with Infosys to Launch Centre of Excellence

Recently, GitHub also partnered with Infosys to launch the first GitHub Center of Excellence in Bangalore. This initiative aims to leverage AI and advanced software solutions to drive global economic growth.

This collaboration promises to enhance the speed and efficiency of software production worldwide by integrating GitHub Copilot across Infosys’ developer teams and extending its capabilities to its clients. The partnership represents a generational opportunity for Global Systems Integrators (GSIs) to spearhead advancements in the AI and software sectors.

“At Infosys, we’re passionate about unlocking human potential, and GitHub is a strategic partner in this endeavor. GitHub Copilot is empowering our developers to become more productive, efficient, and enabling them to focus more on value creating tasks.

“Generative AI is transforming every aspect of the software development lifecycle, and using Infosys Topaz assets, we are accelerating Gen AI adoption for our clients. We are excited to work with GitHub to unlock this technology’s full potential and deliver client relevant solutions,” Mohammed Rafee Tarafdar, CTO at Infosys, said.


The top AI features Apple announced at WWDC 2024

Apple Software Engineering SVP Craig Federighi, seen presenting Apple Intelligence at WWDC 2024

Today marked the kickoff of Apple’s WorldWide Developer Conference (WWDC), the annual event where Apple announces some of the biggest features headed to its devices, apps and software. And this year’s WWDC is a doozy. Thanks to Apple’s newfound — and heavy — investment in generative AI tech, the company had loads to showcase on the AI front, from an upgraded Siri to AI-generated emoji.

Apple announced a deal with OpenAI to bring ChatGPT, OpenAI’s AI-powered chatbot experience, to a range of its devices. It introduced new photo editing tools to remove objects and people from photos. And it rolled out an AI capability to proofread, rewrite and summarize text across content, from notes to emails.

Here’s a roundup of a few of the more noteworthy Apple AI announcements from WWDC 2024.

New Siri

Siri got a makeover courtesy of Apple’s overarching generative AI push this year, called Apple Intelligence.

Thanks to Apple Intelligence, Siri — which has a revamped look, with a new icon and glowing indicator light around the edges of a device’s screen — can now handle stumbles in speech and better understand context. You can now also type to Siri, and it can answer questions, including questions about how to use your iPhone, iPad or Mac.


Soon, Siri will become even more capable with onscreen awareness and the ability to take action in and across apps — so you can ask Siri to, for example, “make this photo pop” and then “add this photo” to another app. During the WWDC keynote on Monday, Apple gave the example of Siri finding a photo of your license, extracting your ID number and entering it into a web form for you.

To take advantage of the new Siri, you’ll need an Apple device that supports Apple Intelligence — specifically the iPhone 15 Pro and devices with M1 or newer chips.

ChatGPT integration

Apple’s bringing ChatGPT to Siri and other first-party apps and capabilities across its operating systems.

Siri users will soon be able to route questions to ChatGPT for “expertise” where it might be helpful, Apple says. You can include photos with the questions you ask ChatGPT via Siri, or ask questions related to your docs or PDFs.


Apple’s also integrated ChatGPT into OS-wide tools such as Writing Tools (powered by Apple Intelligence), which lets you create content, including images, with ChatGPT, or start with an initial idea and send it to ChatGPT for a revision or variation.

ChatGPT integrations will arrive on iOS 18, iPadOS 18 and macOS Sequoia later this year, Apple says, and will be free without the need to create a ChatGPT or OpenAI account. Initially, they’ll be powered by GPT-4o, OpenAI’s recently introduced flagship generative AI model.

Genmoji and Image Playground

Genmoji, coming soon to iOS 18 on devices that support Apple Intelligence, lets you create AI emoji-like images of anyone in your photo library — or just custom emoji. Genmoji can be used as a sticker for reacting to messages with a Tapback or inline with your messages, Apple says.

A separate new image generation capability allows iPhone users to create AI images of people they’re messaging with. Apple Intelligence will have an understanding of who you’re chatting with, Apple says — so if you want to personalize the chat with a custom AI image, you can do so on the fly.


There’s also Image Playground, a new image-generating feature that works across apps like Notes, Freeform, Keynote and Pages.

Available as a standalone app and API for developers, Image Playground allows users to create images using concepts such as themes, costumes, accessories, places and more. You select the themes you want to include and, minutes later, Image Playground creates a preview of your image.

AI photo editing

Apple’s new Clean Up tool, built into the upgraded Photos app, removes unwanted people and objects from pics.

Clean Up can be used on any picture in Photos by circling or highlighting the thing — or person — to be removed. Relying on AI, Clean Up removes the selected element and replaces it with contextually-aware pixels to try to make it seem like the thing (or person) was never there to begin with.


On the subject of Photos, it’s now better organized thanks to AI. Coming in iOS 18, Photos will show collections of photos automatically organized by topics like time, people, favorite memories, trips and more. And you’ll be able to use more specific terms to search across photos.

Transcribed calls

Coming soon to iPhone 15 Pro and newer models, iOS will optionally record and transcribe your phone calls.

The feature — which must be enabled manually, and which informs the party on the other end of the line that the call is being recorded so as to not run afoul of privacy laws — transcribes what’s said during the call and then provides a summary of the key points discussed in iOS’ Notes app.

Apple noted (no pun intended) during the keynote on Monday that you’ll also be able to record and transcribe audio from within the Notes app.

Meta AI’s Obsession with Animal Themes Becomes Clearer with Husky, a Unified Open-Source Language Agent


In yet another animal-themed endeavour, researchers at Meta, the Allen Institute for AI and the University of Washington have proposed a new open-source language agent designed for complex, multi-step reasoning tasks, christened Husky.

Unlike existing models that focus on specific domains, the researchers claim that Husky operates over a unified action space. This means that it can handle diverse challenges such as numerical, tabular, and knowledge-based reasoning, as opposed to specialised agents that can focus on specific challenges like agents for coding.

Husky iterates between generating actions to solve tasks and executing these actions using expert models, constantly updating its solution state. This iterative approach has proven a key point of distinction, allowing Husky to outperform previous agents across the 14 datasets used for evaluation.
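The generate-act-update loop described above can be sketched in a few lines. The action generator and the expert "tools" below are hypothetical stand-ins for illustration, not Husky's actual modules.

```python
def run_agent(task, generate_action, tools, max_steps=10):
    """Iterate between generating an action and executing it with an
    expert tool, updating the solution state after every step."""
    state = {"task": task, "history": []}
    for _ in range(max_steps):
        action, arg = generate_action(state)
        if action == "finish":
            return arg  # the agent decided the task is solved
        result = tools[action](arg)  # route the step to an expert model
        state["history"].append((action, arg, result))
    return None  # step budget exhausted

# Toy demo: a single "expert" that evaluates arithmetic (demo only).
tools = {"calc": lambda expr: eval(expr)}

def generate_action(state):
    if not state["history"]:
        return ("calc", "2 + 3 * 4")        # first step: compute
    return ("finish", state["history"][-1][2])  # then return the result

print(run_agent("compute 2 + 3 * 4", generate_action, tools))  # → 14
```

In the real system, the roles played by `generate_action` and `tools` are filled by a fine-tuned action generator and specialised expert models (code, math, retrieval), but the control flow follows this shape.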


Focus on Mixed-Tool Reasoning

One of Husky’s key innovations is its capability to manage mixed-tool reasoning. It excels in tasks that require retrieving missing knowledge and performing numerical calculations, achieving performance on par with, or exceeding, state-of-the-art models like GPT-4.

The researchers have also introduced HuskyQA, an evaluation set specifically designed to stress test language agents on mixed-tool reasoning tasks, particularly to perform numerical reasoning and retrieve missing knowledge.

Language agents perform complex tasks by using tools to execute each step precisely. However, most existing agents are based on proprietary models or designed to target specific tasks, such as mathematics or multi-hop question answering.

“Our experiments show that Husky outperforms prior language agents across 14 evaluation datasets,” the researchers stated.

While it is true that AI agents have gained significant popularity over the last couple of years, the introduction of an agent capable of reasoning over a number of complex tasks means that agent capabilities are quickly expanding.
