How Gen AI is transforming education

In recent years, Artificial Intelligence has seen numerous innovations. One of the most significant is the introduction of Generative AI (Gen AI). This technology focuses on creating new, original, and valuable content in response to a given prompt.

That’s why it quickly became popular in numerous fields, including education. Gen AI has positively transformed the learning environment in educational institutions in several ways. In this blog post, we explore some of the most significant of these ways in detail, so stick with us till the end.

4 Major ways Gen AI is transforming education

Below, we explain four ways in which Generative AI is positively transforming the field of education.

Assistance in better content creation for courses

Creating course content is fundamental for any educational institution, and it contributes greatly to students’ academic performance. For instance, well-crafted course material keeps students engaged, resulting in better learning.

Previously, creating effective content for learning courses demanded a significant amount of time and effort from teachers and staff. Fortunately, that is no longer the case since the introduction of Generative AI.

Numerous tools are available on the internet, such as paragraph generators. Built on Gen AI technology, they allow teachers to create compelling content on any course topic to support students’ learning. Such tools can also generate course content in a specific writing tone or style, such as simple or formal.

Another useful Gen AI content creation tool is a summary generator. Tools like this let educational staff craft high-quality course summaries in no time, and students can use these summaries to quickly get an idea of the entire course.

By utilizing these types of tools, the content creation process for courses becomes not only quicker but also more effective in terms of quality, ultimately enhancing students’ academic performance.

Routine task automation

Like any other field, education has numerous repetitive routine tasks. One such task is grading. Every class has many students, so carefully evaluating each student’s work and providing constructive feedback requires energy, time, and focus that teachers may not always have.

In this scenario, Gen AI tools like CoGrader can help. Such tools automatically assign grades to academic work and also provide a detailed explanation or feedback alongside the grade.

This way, professors save a lot of time and effort, and students get detailed feedback for better learning.

Apart from grading, creating timetables for teachers is a routine yet complex task for management. Thankfully, Gen AI tools such as EduTime can generate timetables automatically in no time.

Promoting personalized learning

By nature, each student has different learning capabilities. This is why personalized learning is being adopted and promoted all around the world, and Gen AI plays a key role here as well. Let us explain how.

Numerous AI-based virtual tutors and platforms are available, such as ALEKS by McGraw Hill. It is a learning platform that constantly monitors each student’s learning behavior, capability, and knowledge state, and then provides guidance accordingly for better learning.

It is also important to note that personalized learning can involve written content, images, and videos, and research suggests that the human mind learns better from visuals.

For the creation of high-quality visuals, many notable Gen AI tools are available, such as DALL-E 3 by OpenAI. It allows both teachers and students to create fully personalized images to support their overall learning process.

So, by promoting personalized learning through these sorts of tools, professors and educational institutions can ensure that their students learn to their full potential.

Customized data-based feedback

This is the final way in which Generative AI is transforming education. For maximum academic performance, institutions must provide students with regular performance feedback reports so they can see where they are doing well and where they need improvement.

Providing detailed, consistent feedback to each student is difficult for a teacher, and Gen AI can help here as well. Let us explain how.

Several advanced tools are available on the internet, such as School Analytics by PowerSchool. Tools like this give teachers and staff a dedicated dashboard that generates useful information about students’ performance, which can be used in feedback reports.

Final words

Generative AI is one of the newest developments in the field of Artificial Intelligence. It has transformed numerous fields, especially education, in several ways, including on-demand quality content creation for learning, assistance with personalized learning, and more. In this article, we have explained these ways in detail.

Snowflake, NVIDIA Join Forces to Enhance Custom AI Applications

Snowflake announced its collaboration with NVIDIA during the Snowflake Summit 2024. This partnership aims to empower customers and partners to develop bespoke AI data applications within Snowflake, leveraging NVIDIA’s AI technology.

This collaboration sees Snowflake integrating NVIDIA AI Enterprise software, incorporating NeMo Retriever microservices into Snowflake Cortex AI, Snowflake’s managed LLM and vector search service. This integration allows organisations to link custom models to varied business data, delivering precise responses seamlessly.

Additionally, Snowflake Arctic, an open, enterprise-grade LLM, now supports NVIDIA TensorRT-LLM software, enhancing performance. Arctic is also accessible as an NVIDIA NIM inference microservice, broadening developers’ access to its capabilities.

As enterprises strive to maximise AI’s potential, the need for data-driven customisation grows. The Snowflake-NVIDIA collaboration facilitates rapid development of specific AI solutions, benefiting businesses across various sectors.

“Pairing NVIDIA’s full stack accelerated computing and software with Snowflake’s state-of-the-art AI capabilities in Cortex AI is game-changing,” stated Sridhar Ramaswamy, CEO of Snowflake. “Together, we are unlocking a new era of AI where customers from every industry and every skill level can build custom AI applications on their enterprise data with ease, efficiency, and trust.”

“Data is the essential raw material of the AI industrial revolution,” said Jensen Huang, founder and CEO of NVIDIA. “Together, NVIDIA and Snowflake will help enterprises refine their proprietary business data and transform it into valuable generative AI.”

Notable NVIDIA AI Enterprise software capabilities offered in Cortex AI include:

  • NVIDIA NeMo Retriever: Provides accurate and high-performance information retrieval for enterprises.
  • NVIDIA Triton Inference Server: Facilitates the deployment, running, and scaling of AI inference for various applications on any platform.
  • NVIDIA NIM inference microservices, part of NVIDIA AI Enterprise, can be deployed within Snowflake as a native app using Snowpark Container Services. This setup allows organisations to deploy foundational models directly within Snowflake easily.

Quantiphi, an AI-first digital engineering firm and ‘Elite’ partner with both Snowflake and NVIDIA, exemplifies this innovation. Quantiphi’s Snowflake Native Apps, baioniq and Dociphi, are designed to enhance productivity and document processing within specific industries. These apps, developed using the NVIDIA NeMo framework, will be available on Snowflake Marketplace.

The Snowflake Arctic LLM, launched in April 2024 and trained on NVIDIA H100 Tensor Core GPUs, is now available as an NVIDIA NIM. This makes Arctic accessible in seconds, either via the NVIDIA API catalogue with free credits or as a downloadable NIM, offering flexible deployment options.

Earlier this year, Snowflake and NVIDIA expanded their collaboration to create a unified AI infrastructure and compute platform in the AI Data Cloud. Today’s announcements mark significant advancements in their joint mission to help customers excel in their AI initiatives.

The post Snowflake, NVIDIA Join Forces to Enhance Custom AI Applications appeared first on AIM.

Intel Finally Unveils Lunar Lake AI Chip for Copilot+ PC

At Computex 2024, Intel officially announced details about its forthcoming Lunar Lake chips, set to power Copilot+ AI PCs this fall. These new chips will deliver up to 48 TOPS (tera operations per second) of AI performance, supported by an upgraded neural processing unit (NPU).

This represents a significant leap from Intel’s previous Meteor Lake chips, which offered a 10 TOPS NPU, and positions Intel in the ongoing AI performance race against competitors like AMD and Qualcomm.

Intel’s Lunar Lake chips promise substantial advancements. Alongside the impressive AI performance, they will feature a new Xe2 GPU, providing 80 percent faster gaming performance than the previous generation.

Additionally, an AI accelerator in the chip will contribute an extra 67 TOPS of performance. Despite these enhancements, Intel faces competition from AMD’s Ryzen AI 300 chips, launching in July with 50 TOPS NPUs, and Qualcomm’s Snapdragon X Elite and X Plus chips. These competitors highlight the aggressive push within the AI PC market.

In a notable development, Lunar Lake chips will include on-board memory, akin to Apple Silicon. Options of 16GB or 32GB of RAM will be available, but like Apple’s design, these will not be upgradable.

This integration allows for a reduction in latency and a 40 percent decrease in system power usage, although it limits users needing more RAM until Intel’s next chip family, Arrow Lake, is released.

Lunar Lake will also feature eight cores, split between performance and efficiency (P-cores and E-cores). The chip includes an “advanced low-power island” for efficiently managing background tasks, contributing to a claimed 60 percent improvement in battery life over Meteor Lake.

Qualcomm’s chips, known for their power efficiency, reportedly achieve over 20 hours of battery life on Copilot+ Surface devices, although independent testing is pending.

Connectivity for Lunar Lake will include Wi-Fi 7, Bluetooth 5.4, PCIe Gen5, and Thunderbolt 4. However, Intel has not yet committed to integrating Thunderbolt 5, which is expected to launch later this year.

During a media briefing ahead of Computex, Intel shared benchmark results, indicating Lunar Lake’s superiority over Meteor Lake in tasks like running Stable Diffusion. Lunar Lake completed 20 iterations in 5.8 seconds, compared to 20.9 seconds for Meteor Lake, despite drawing slightly more power.

Specific chip models and deeper specifications for Lunar Lake are yet to be disclosed, but Intel’s latest offerings mark a significant stride in AI and PC performance, setting high expectations for their launch this fall.

The post Intel Finally Unveils Lunar Lake AI Chip for Copilot+ PC appeared first on AIM.

Introduction to autonomous agents from a developer perspective – Part one

What are autonomous AI agents?

Autonomous AI agents are systems capable of performing tasks without human intervention.

Agents have been around in various incarnations. Most recently, an element of autonomy was achieved through reinforcement learning (RL). However, it is still hard to deploy RL beyond virtual environments and games. Autonomous AI agents (called agents henceforth in this document) are more complex. They are designed to perceive their environment, make decisions based on their perceptions and pre-programmed knowledge, and execute actions to achieve specific goals. Agents are also based on LLMs, which makes them more efficient.

In various forms, conventional AI agents (predating LLM-based autonomous agents) are already used in some capacity: for example, in self-driving cars, robotics, virtual assistants like Alexa, autonomous drones, and algorithmic trading.

However, the real potential of agents, as discussed here, lies in their capacity to solve problems at a higher level of abstraction. In simple terms, if you want to book a holiday to Greece, the AI can split that high-level task into subtasks and autonomously execute them to create an overall solution. It is this capability to execute a high-level task that makes agent technology significant.

In this blog post series, we explore autonomous agents from the perspective of a developer.

Workflow of autonomous agents

Autonomous agents involve a series of steps:

  1. Sensing and perception where the agent gathers data from its environment using various sensors. The raw sensor data is preprocessed to eliminate noise and extract relevant features.
  2. Based on the sensing, the agent constructs a model of its environment, which could be a physical map for a robot or a conceptual map for a software agent. It also determines the context, including its own state in the environment.
  3. The agent identifies its objectives based on predefined goals or learned behaviors. The agent then develops a plan to achieve its goals, which includes evaluating different actions for their effectiveness in achieving the goal.
  4. The agent chooses and performs the best action based on its decision-making process.
  5. The agent gathers feedback from the environment about the results of its actions, which could be immediate sensor data or delayed outcomes.
  6. The agent updates its models and decision-making processes based on the feedback, which might involve updating machine learning models or refining rules.
  7. The agent may need to communicate with other agents or humans, which could involve sending data, reporting status, or coordinating actions.
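
The perception–action loop above can be sketched in a few lines of Python. This is a toy illustration, not any specific framework’s API; the `Environment` and `Agent` classes and their methods are hypothetical stand-ins for the numbered steps.

```python
# Minimal sketch of an autonomous agent's perception-action loop.
# All names here are hypothetical illustrations, not a real framework's API.

class Environment:
    """Toy environment: the agent must drive a counter to a target value."""
    def __init__(self, target=5):
        self.state = 0
        self.target = target

    def sense(self):
        return self.state                 # step 1: sensing and perception

    def act(self, action):
        self.state += action              # step 4: execute the chosen action
        return self.target - self.state   # step 5: feedback from the environment

class Agent:
    def __init__(self, goal):
        self.goal = goal
        self.model = None                 # the agent's model of the environment

    def perceive(self, observation):
        self.model = observation          # step 2: update world model / context

    def plan(self):
        # Step 3: evaluate candidate actions against the goal.
        gap = self.goal - self.model
        return 1 if gap > 0 else (-1 if gap < 0 else 0)

    def learn(self, feedback):
        pass                              # step 6: refine models/rules from feedback

env = Environment(target=5)
agent = Agent(goal=5)
for _ in range(10):                       # run the loop until the goal is met
    agent.perceive(env.sense())
    action = agent.plan()
    if action == 0:
        break                             # goal reached, nothing left to do
    feedback = env.act(action)
    agent.learn(feedback)

print(env.state)  # → 5
```

In an LLM-based agent, the `plan` step would typically be an LLM call, and `learn` would update prompts, memory, or model weights rather than a no-op.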

From a developer’s perspective, creating and deploying an autonomous AI agent involves a series of systematic steps, encompassing design, development, testing, and deployment.

Here is a general workflow for deploying agents:

  1. Problem Definition and Requirements Gathering
  2. Design
  3. Data Collection and Preparation
  4. Model Development
  5. Integration and System Development
  6. Implementation of Learning and Adaptation
  7. Testing and Validation
  8. Deployment
  9. User Interaction and Feedback (if applicable)
  10. Iteration and Improvement

But this flow hides the complexity of agent development.

Andrew Ng describes the four design patterns of agentic workflows as Reflection, Tool Use, Planning, and Multi-Agent Collaboration.

We can expand on these four design patterns as follows:

1. Reflection: Reflection refers to the ability of an AI agent to think about its own thinking process. This includes evaluating its actions, learning from experiences, and adapting its strategies based on past performance. Reflection enables agents to improve over time, make better decisions, and avoid repeating mistakes.

Key aspects of reflection include:

  • Self-Monitoring: The agent monitors its own performance and processes.
  • Learning from Experience: Using techniques like reinforcement learning, the agent learns from the feedback its actions receive.
  • Adaptive Behavior: The agent modifies its strategies and behaviors based on past outcomes and new information.

Examples of reflection include autonomous vehicles that continuously analyse driving decisions and update the driving model based on new data, and game-playing agents that evaluate past games to improve strategies and decision-making in future games.

2. Tool Use: Tool Use involves AI agents leveraging external tools or resources to achieve their goals. Key aspects include:

  • Using APIs, databases, and other software tools to obtain information or perform actions.
  • Delegating specific tasks to specialized tools, focusing the agent’s own processing power on decision-making and coordination.
  • Seamlessly integrating external tools into the agent’s workflow to enhance functionality.

Examples include Robotic Process Automation (RPA).
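
As a rough sketch of the tool-use pattern, an agent can keep a registry of callable tools and dispatch subtasks to them. The tool names and functions below are hypothetical stand-ins; in a real agent, an LLM would typically decide which tool to call and with what arguments.

```python
# Sketch of the tool-use pattern: the agent delegates subtasks to registered
# external tools and keeps only the coordination logic itself.

def calculator(expression: str) -> str:
    """Stand-in for a math tool (restricted eval, illustration only)."""
    return str(eval(expression, {"__builtins__": {}}))

def lookup(country: str) -> str:
    """Stand-in for a database or web-search tool."""
    capitals = {"Greece": "Athens", "Japan": "Tokyo"}
    return capitals.get(country, "unknown")

TOOLS = {"calculator": calculator, "lookup": lookup}

def run_agent(plan):
    """Execute a plan: a list of (tool_name, argument) steps.
    In a real agent, an LLM would produce this plan from a user prompt."""
    results = []
    for tool_name, argument in plan:
        tool = TOOLS[tool_name]          # delegate the subtask to a tool
        results.append(tool(argument))   # collect the tool's output
    return results

results = run_agent([("calculator", "2 + 3"), ("lookup", "Greece")])
print(results)  # ['5', 'Athens']
```

The key design point is that the agent itself stays small: capability lives in the tools, and the agent only chooses and sequences them.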

3. Planning: Planning refers to the ability of an AI agent to formulate a sequence of actions to achieve a specific goal. Planning involves anticipating future states, considering various actions and their outcomes, and selecting the optimal sequence to reach the desired objective.

Key aspects of planning include:

  • Goal Setting: Defining clear objectives for the agent to achieve.
  • Action Sequencing: Developing a series of steps that lead from the current state to the goal state.
  • Contingency Handling: Planning for alternative actions in case of unexpected changes or failures.

Examples include robotics, where an autonomous robot plans a path through an environment while avoiding obstacles, and supply chain management, where logistics and inventory are planned to ensure timely delivery of goods.
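
A minimal sketch of the planning pattern: a toy breadth-first search over states that finds the shortest action sequence from a start state to a goal state. The action set here is hypothetical; real planners work over far richer state spaces.

```python
# Toy planner: breadth-first search for the shortest action sequence
# that transforms a start state into a goal state.
from collections import deque

# Hypothetical actions an agent could apply to an integer "state".
ACTIONS = {
    "increment": lambda s: s + 1,
    "double":    lambda s: s * 2,
}

def plan(start, goal, max_depth=10):
    """Return the shortest list of action names leading from start to goal."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, actions = queue.popleft()
        if state == goal:
            return actions                 # goal reached
        if len(actions) >= max_depth:
            continue                       # contingency: bound the search depth
        for name, apply_action in ACTIONS.items():
            nxt = apply_action(state)
            if nxt not in seen:            # avoid revisiting states
                seen.add(nxt)
                queue.append((nxt, actions + [name]))
    return None                            # no plan found within the bound

print(plan(1, 6))  # ['increment', 'increment', 'double']
```

Because the search is breadth-first, the first plan found is also the shortest, which mirrors the "selecting the optimal sequence" aspect described above.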

4. Multi-Agent Collaboration

Multi-Agent Collaboration involves multiple AI agents working together to achieve a common goal. This pattern is crucial in scenarios where tasks are too complex or large for a single agent to handle. Collaboration requires communication, coordination, and sometimes negotiation among agents.

Key aspects include:

  • Communication: Agents exchange information to align their actions and strategies.
  • Coordination: Agents synchronize their actions to avoid conflicts and ensure efficient task execution.
  • Negotiation: In some cases, agents need to negotiate to resolve conflicts or distribute resources.

Examples of multi-agent collaboration include:

  • Swarm Robotics: Multiple robots collaborating to perform tasks like search and rescue, environmental monitoring, or construction.
  • Distributed Computing: Multiple AI systems working together to solve large-scale computational problems, such as data analysis or simulations.
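
The communication and coordination aspects can be sketched with toy agents that split a task via a shared queue. All names below are hypothetical stand-ins for real inter-agent message passing, not a real multi-agent framework.

```python
# Toy multi-agent collaboration: a coordinator splits a task into subtasks,
# worker agents process them, and the results are collected in order.
from queue import Queue

class WorkerAgent:
    def __init__(self, name):
        self.name = name

    def handle(self, subtask):
        # Stand-in for real work (e.g., an LLM call or a robot action).
        return f"{self.name} finished {subtask}"

def coordinator(task_parts, workers):
    """Coordinate: assign subtasks round-robin and collect the results."""
    inbox = Queue()
    for i, part in enumerate(task_parts):
        worker = workers[i % len(workers)]   # coordination: distribute work
        inbox.put(worker.handle(part))       # communication: report result back
    return [inbox.get() for _ in task_parts]

workers = [WorkerAgent("agent-1"), WorkerAgent("agent-2")]
results = coordinator(["scan area A", "scan area B", "scan area C"], workers)
print(results)
# ['agent-1 finished scan area A', 'agent-2 finished scan area B',
#  'agent-1 finished scan area C']
```

A production system would run the workers concurrently and add negotiation for contested resources; the round-robin assignment here only illustrates the coordination idea.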

In the next section, we will discuss the implications for developers.

The 5 Best Udemy Courses That Are Worth Taking in 2024

Udemy is an online course platform where professionals and aspiring workers can find training on a wide variety of subjects. Here are five of the best Udemy courses for workers in the tech field who want to start new careers or add new skills to their existing job. These are beginner or beginner-to-advanced courses, but Udemy offers a wide variety of jumping-on points for all skill levels.

I chose these courses based on their potential to result in practical applications at work, user reviews and popularity based on Udemy’s Bestselling leaderboard.

  • The Complete Python Developer
  • Microsoft Excel — Excel from Beginner to Advanced
  • ChatGPT Masterclass: The Guide to AI & Prompt Engineering
  • The Complete 2024 Web Development Bootcamp
  • The Data Science Course: Complete Data Science Bootcamp 2024

Best Udemy courses: Comparison table

Course | Cost | Duration | Skill level | Certification upon completion?
The Complete Python Developer | $199.99 | 30.5 hours | Beginner | Yes
Microsoft Excel — Excel from Beginner to Advanced | $124.99 | 21 hours | Beginner | Yes
ChatGPT Masterclass: The Guide to AI & Prompt Engineering | $124.99 | 16 hours | Beginner | Yes
The Complete 2024 Web Development Bootcamp | $109.99 | 61 hours | Beginner | Yes
The Data Science Course: Complete Data Science Bootcamp 2024 | $119.99 | 31 hours | Beginner | Yes

The Complete Python Developer

The Complete Python Developer course is made up of a series of on-demand videos. Image: Udemy/Screenshot by TechRepublic

With Python consistently ranking as one of the most popular programming languages, using it could open up a lot of developer career options. Udemy’s Complete Python Developer course is about 31 hours, including a hands-on coding exercise and downloadable resources. This comprehensive course teaches you how to use Python in web development, machine learning, automation and more. You don’t need prior experience in Python to take it.

Cost

The Complete Python Developer course costs $199.99.

Microsoft Excel — Excel from Beginner to Advanced

Microsoft Excel – Excel from Beginner to Advanced is one of Udemy’s best-selling courses. Image: Udemy/Screenshot by TechRepublic

Mastering Excel enables you to create dynamic reports with PivotTables, automate some work tasks, run dynamic formulas in VLOOKUP and more. Knowing how to take advantage of Excel beyond just filling in fields can make your job more efficient and secure your place in the organization as the Excel wizard. Udemy’s 21-hour course includes downloadable resources for you to keep after the curriculum is done.

Learners who took Microsoft Excel – Excel from Beginner to Advanced praise the instructor, Kyle Pew, for his engaging teaching style. Some students noted in reviews that, while the course is listed as a beginner class, the sections involving the Microsoft programming language VBA might best serve people who already have a background in VBA.

Learners who take this course are advised to have Microsoft Excel 2007, 2010, 2013, 2019 or Microsoft 365 Excel on their computer in order to follow along with the material shown in the course.

Cost

The Microsoft Excel – Excel from Beginner to Advanced course costs $129.99.

ChatGPT Masterclass: The Guide to AI & Prompt Engineering

ChatGPT Masterclass: The Guide to AI & Prompt Engineering is a 16-hour pre-recorded course with live Q&A available. Image: Udemy/Screenshot by TechRepublic

How to get the most out of ChatGPT continues to be a hot topic. Organizations across fields are exploring which generative AI product is right for them and how generative AI can change the way they work.

This course is a thorough exploration of how to use prompt engineering to make ChatGPT produce the right answers for you. The course explains how to use ChatGPT to help build websites in WordPress, write resumes and generate content.

Some reviews pointed out the presenters’ opinions can find their way into the course and that the website building section is longer than the other sections.

Cost

The ChatGPT Masterclass: The Guide to AI & Prompt Engineering course costs $124.99.

The Complete 2024 Web Development Bootcamp

The Complete 2024 Web Development Bootcamp is taught by Dr. Angela Yu. Image: Udemy/Screenshot by TechRepublic

This thorough course covers everything from the very basics (“How does the internet actually work?”) to the details of advanced CSS, JavaScript, APIs and more. Graduates of this course will have completed building websites, which they can show to potential employers. The last 12 hours or so of the course are devoted to Web3 and cryptocurrency, which may not be relevant to a wider web development audience. Participants praised the instructor, Dr. Angela Yu, and the visual aids used throughout the course.

Cost

The Complete 2024 Web Development Bootcamp costs $109.99.

The Data Science Course: Complete Data Science Bootcamp 2024

The Complete Data Science Bootcamp contains 31 hours of video and hands-on exercises. Image: Udemy/Screenshot by TechRepublic

The Data Science Course: Complete Data Science Bootcamp 2024 includes hands-on exercises and teaches you how to pre-process data, plus the basics and details of machine learning, deep learning, statistical analysis, Python and NumPy and neural networks.

Reviewers praised the instructor for the course’s thoroughness; however, some reviewers noted the evaluation tool in the hands-on coding sections does not work well.

Data science covers a wide swath of topics, making it a good branching-off point to other topics you might want to explore, such as statistical techniques, deep learning and regression analysis. This course teaches calculus and linear algebra as they apply to programming in data science, providing a solid foundation for a future career in machine learning.

Cost

Data Science Course: Complete Data Science Bootcamp 2024 costs $119.99.

Free Courses on Udemy

Udemy sometimes offers brief tutorials for free, such as the Python lessons at the bottom of this page. Some other free tutorials — which are excerpts from paid courses — include:

  • Introduction to Artificial Intelligence in Software Testing
  • What is Normal Distribution?
  • What is Ethical Hacking?

Singapore looks to boost AI with plans for quantum computing and data centers

Singapore is looking to carve out a global footprint in artificial intelligence (AI) with the release of international standards for large language model (LLM) testing and investments in quantum computing and new data center capacity.

Quantum has the potential to unlock new value, where higher processing capabilities can be harnessed in areas such as simulating complex molecules for drug discovery, said Deputy Prime Minister Heng Swee Keat at last week's Asia Tech x Singapore 2024 summit.

He added that quantum computing can also have synergies with AI, for example, in improving the efficiency of developing and training advanced AI models. This development, in turn, can further drive innovations in deep learning, natural language processing, and computer vision.

However, there still are challenges to resolve in quantum, including requirements for cryogenic cooling and error correction, Heng said. He noted that researchers worldwide were assessing different approaches to achieve scale and enable quantum computing to be commercially viable.

Singapore wants to address these challenges with its National Quantum Strategy, coupled with almost SG$300 million ($221.99 million) in investment. This cash is on top of a previous SG$96.6 million commitment announced in 2022. The new investment is earmarked for five years, through to 2030, to boost the country's position as a leading hub in the development and deployment of quantum technologies, Heng said.

This roadmap focuses on four areas, including initiatives in quantum research (such as quantum communications and security, and quantum processors) and a scholarship program to produce 100 PhD and 100 master's-level graduates over the next five years, he said.

Efforts are underway for Singapore to build capabilities in the design and development of quantum processors. This work will encompass research on qubit technologies, including photonic networks, neutral atoms, and superconducting circuits.

ZDNET understands Singapore's target is to have the first prototype ready in the next three years and scale out production in five years.

The government in 2022 unveiled a three-year initiative to build a quantum-safe network that it hopes will showcase "crypto-agile connectivity" and facilitate trials with both public and private organizations. The initiative also includes a quantum security lab for vulnerability research.

Laying the ground for green data centers

Singapore last week also launched its green data center roadmap to chart "digital sustainability and green growth pathways" for such facilities, supporting AI and computing developments.

The country has over 1.4 gigawatts of data center capacity and is home to more than 70 cloud, enterprise, and co-location data centers.

Singapore is aiming to add at least 300 megawatts of additional data center capacity "in the near term" and another 200 megawatts through green energy deployments, said Janil Puthucheary, senior minister of state for the Ministry of Communications and Information, at the summit.

Efforts will be made to enhance efficiency through both hardware and software, Puthucheary said, pointing to technologies that maximize energy efficiency and capacity, and green software tools.

He added that improving data center efficiency is also about greening software, so the carbon emissions of applications can be reduced.

He said the focus will be placed on data centers to accelerate their use of green energy, with the government offering support via grants and incentives to switch to energy-efficient IT equipment. In addition, the Infocomm Media Development Authority (IMDA) will work with PUB to help data centers push their water usage effectiveness (WUE) to 2.0 cubic meters or less per megawatt hour, down from the 2021 median WUE of 2.2 cubic meters.

IMDA will jointly develop standards and certifications with industry partners to drive the development and operation of data centers with power usage effectiveness (PUE) of 1.3 or lower.

In addition, the BCA-IMDA Green Mark for data centers will be refreshed by year-end to raise the standards for energy efficiency in data centers. IMDA will also introduce standards for IT equipment energy efficiency and liquid cooling by 2025, to drive the adoption of these technologies in Singapore.

The green data center roadmap outlines plans to reduce energy use for air-cooling by raising operating temperatures via IMDA's tropical DC methodology.

According to the government agency, data centers can achieve 2% to 5% energy savings for every 1°C increase in operating temperature.

It also pointed to simulations that have found existing data centers can achieve a 50% reduction in energy consumption of supporting infrastructure, with energy-efficient retrofits and upgrades for key equipment, such as chiller plants and uninterruptible power supplies.

"We aim to uplift all data centers in Singapore to achieve PUE of less than 1.3 at 100% IT load over the next 10 years," IMDA said. "This gives existing data centers sufficient time to plan for upgrades."

The tech industry today emits an estimated 1.5% to 4% of global greenhouse gas emissions, Heng noted, with this figure projected to climb as the use of AI expands alongside the need for data storage and processing.

He said technologies that drive the country's digital economy, such as cloud and AI, fuel demand for powerful and energy-intensive computing.

"Data centers lie at the heart of such activities and require large amounts of energy for processing and cooling. Greening ICT, especially data centers, is therefore crucial in a digital and carbon-constrained world," he said.

"There is a need to balance the economic and social benefits of digital applications with the environmental effects from the resultant emissions," he said, noting that Singapore has committed to a net-zero target by 2050.

"The [green data center] roadmap sets out low-carbon energy sources that data centers can explore, which include bioenergy, fuel cells with carbon capture, low-carbon hydrogen and ammonia for a start," Puthucheary explained. "We welcome proposals from the industry to push boundaries in realizing these pathways in Singapore."

Charting global test standards for AI models

Meanwhile, the country wants to lead the way by releasing standards for large language model (LLM) testing, developed via partnerships with global organizations such as MLCommons, IBM, and Singtel.

Dubbed Project Moonshot, the LLM testing tool provides benchmarking, red-teaming, and testing baselines to help developers and organizations mitigate risks associated with LLM deployment.

LLMs without guardrails can reinforce biases and create harmful content, with unintended consequences. "IMDA is seeking to establish guardrails to manage the risks while enabling space for innovation," the government agency said.

"It is important to adopt an agile, test-and-iterate approach to address key risks in model development and use. Project Moonshot provides intuitive results, so testing unveils the quality and safety of a model or application in an easily understood manner, even for a non-technical user."

The testing tool provides a five-tier scoring system where each completed scoring sheet will place the application on a scale. Grade cut-offs can be determined by the author of each of these scoring sheets.
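
Because the cut-offs are left to each scoring sheet's author, tier placement reduces to a threshold lookup. A sketch with hypothetical cut-offs and labels (Project Moonshot's real boundaries are defined per sheet):

```python
# Hypothetical five-tier cut-offs; Project Moonshot leaves these to the
# author of each scoring sheet.
TIERS = [(0.9, "A"), (0.75, "B"), (0.6, "C"), (0.4, "D"), (0.0, "E")]

def grade(score: float) -> str:
    """Place a normalized score in [0, 1] on the five-tier scale."""
    for cutoff, label in TIERS:
        if score >= cutoff:
            return label
    return TIERS[-1][1]  # scores below every cut-off land in the lowest tier

print(grade(0.82))  # B
```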

AI Verify Foundation and MLCommons jointly developed the LLM testing benchmarks. The latter is an open-engineering consortium supported by Qualcomm, Google, Intel, and NVIDIA and recognized by the US National Institute of Standards and Technology (NIST) under its AI Safety Consortium. AI Verify Foundation is Singapore's not-for-profit foundation that focuses on developing AI testing tools.

Project Moonshot is currently available as an open beta.

IMDA said it is working with companies such as Anthropic to develop a practical guide to multilingual and multicultural red-teaming for LLMs. The guide is slated for release later this year for global use.

Empowering Educators with AI Literacy

Jill Kowalchuk’s Insights on Integrating AI in K-12 Education

Brought to you by DOCEO AI | Hosted by the AI Think Tank Podcast on May 27th 2024

The AI Think Tank Podcast has been given permission to share this video in an effort to help teachers around the world gain a better understanding of the use of AI in education. Special thanks to Jill Kowalchuk at amii and Ahmad Jawad at DOCEO AI. Contact us for the complete transcript.

Introduction

The rapidly evolving landscape of artificial intelligence (AI) presents both opportunities and challenges for educators. Jill Kowalchuk, a K-12 education advisor at the Alberta Machine Intelligence Institute (AMII), recently delivered an enlightening webinar discussing the impact of AI on education. Her presentation emphasized the importance of AI literacy, ethical considerations, and the empowerment of teachers through professional development. This article summarizes her key points, offering insights into the integration of AI in K-12 education.

Jill Kowalchuk’s Background and Role at AMII

Jill Kowalchuk’s journey began as a junior high social studies teacher. With a master’s degree in education focusing on digital literacy and currently pursuing a PhD in AI ethics, Jill has dedicated her career to understanding and promoting AI in education. At AMII, she leads the AI in K-12 program, aiming to enhance AI literacy among teachers to better support their students.

AMII and the AI in K-12 Program

The Alberta Machine Intelligence Institute (AMII) is part of the Pan-Canadian AI Strategy, collaborating with Mila in Montreal and Vector in Toronto to advance AI research and education. AMII’s motto, “inspiring world-changing machine intelligence for good and for all,” highlights its commitment to ethical AI. Jill’s AI in K-12 program focuses on empowering teachers with AI literacy, enabling them to integrate AI into their classrooms effectively.

Pilot Phase and Resource Development

Jill and her team at AMII began by conducting a pilot phase with high school teachers in the Edmonton area. They engaged teachers from three different school authorities to identify their questions and strategies related to AI and technology. This collaborative approach ensured that the program was tailored to teachers’ needs and realities.

The pilot phase led to the creation of digital learning kits, developed with the help of master’s and PhD students from the University of Alberta’s computing science department. These kits include lesson plans, assessment materials, and videos, all accessible online for free. Teachers piloted these resources during the 2023-2024 school year, providing feedback to refine and improve them.

Scaling Up and Expanding to Elementary Education

Following the successful high school pilot, AMII expanded its efforts to elementary education. The program now includes professional development workshops and resources tailored for elementary teachers, with the goal of reaching a broader range of educators and students.

Professional Development and AI Literacy Workshops

Jill’s professional development workshops, titled “AI Explorations for Educators,” cater to teachers with varying levels of AI literacy. These workshops aim to:

  1. Provide a basic understanding of AI and its applications in education.
  2. Raise important ethical questions about AI in education.
  3. Analyze ethical issues through case studies.

Key Concepts in AI

Jill’s workshops emphasize the importance of understanding key AI concepts, including artificial intelligence, machine learning, and deep learning. She explains that AI is an umbrella term for technologies that enable computers to mimic human behavior. Machine learning, a subset of AI, allows machines to learn from data, while deep learning involves complex algorithms that can independently learn from vast datasets.

Ethical Considerations and AI Governance

A significant part of Jill’s presentation focused on the ethical considerations of AI in education. She highlighted the importance of AI governance, which involves creating policies and frameworks to guide the ethical use of AI. Fairness in AI systems, ensuring that they do not discriminate against specific groups, is another crucial aspect.

AI Literacy and Digital Literacy

AI literacy, according to Jill, stems from digital literacy. It involves the ability to critically evaluate AI technologies, communicate effectively with AI systems, and use AI tools at home, in school, and at work. Jill advocates for viewing AI not merely as a tool but as a collaborative partner that can enhance teaching and learning.

Challenges and Solutions

Jill acknowledged the challenges of introducing AI to teachers with varying levels of interest and understanding. She emphasized the importance of building relationships and providing ongoing support to help educators navigate the complexities of AI. By focusing on overall concepts and ethical questions, rather than just specific tools, Jill aims to foster a deeper understanding and more meaningful integration of AI in education.

Next Steps and Future Directions

AMII continues to expand its AI in K-12 program, with plans to further develop resources for elementary educators and explore new collaborations. Jill and her team are committed to providing accessible AI literacy opportunities, including courses on AI ethics and governance, to support educators and other professionals.

Conclusion

Jill Kowalchuk’s work at AMII underscores the importance of AI literacy in education. By empowering teachers with the knowledge and tools to integrate AI effectively, she is helping to shape a future where AI enhances, rather than disrupts, the learning experience. As AI continues to evolve, ongoing education and ethical considerations will be crucial to ensuring that it benefits all students and educators.

AI IN K-12 EDUCATION LinkedIn Group

Jill Kowalchuk’s LinkedIn Page

Ahmad Jawad’s LinkedIn Page

More political deepfakes exist than you think, according to this AI expert

TrueMedia evaluating a piece of content.

How prevalent are political deepfakes? Most relatively informed citizens can recall major instances of synthetic political content, such as the apparent "robocall" made by President Joe Biden in January to voters in New Hampshire, which turned out to be a synthesized voice created by AI. While there is no authoritative statistic on artificial intelligence (AI) deepfakes, a skeptic might think they're not common, given that only a high-profile few are widely reported on.

But, according to one AI scholar, it's more likely that AI deepfakes are on the rise in advance of the US presidential election in November — you just don't see many of them.

"I would take an even-odds bet of a thousand dollars that we are going to see an unprecedented set of these [deepfakes]" come November, "because it's become so much easier to make them," said Oren Etzioni, founder of the non-profit organization TrueMedia, in an interview with ZDNET last month.

"I would not encourage you to take that bet, because I have more information than you do," he continued with a laugh.

TrueMedia runs servers that assemble multiple AI-based classifier programs for the sole purpose of telling whether an image is a deepfake or not. The organization is backed by Uber co-founder Garrett Camp's charitable foundation, Camp.org.

When a user uploads an image, TrueMedia will produce a label that says either "uncertain," in a yellow bubble, "highly suspicious," in red, or a green bubble with "authentic" if the AI models have a degree of certainty it's not a deepfake. You can see a demo and sign up for beta access to the program here.

Etzioni, a professor at the University of Washington, is also founding chief executive of the Allen Institute for AI, which has done extensive work on detecting AI-generated material.

"We see trial runs, we see trial balloons, we see people setting things up" to produce many more deepfakes as elections arrive later this year, and not just in the US, says Etzioni.

Etzioni thinks it's correct to say that there "isn't an enormous amount" of deepfakes circulating publicly at the moment. However, he added that much of what's actually out there probably goes unnoticed by the general public.

"Do you really know what's happening in Telegram?" he pointed out, referring to the private messaging service.

The founder said TrueMedia is seeing evidence that deepfake creators are ramping up production for later this year, when election season intensifies, and not just in the US.

"This is the year that counts, because we are coming up against a series of elections,and the technology [of deepfakes] has gotten so prevalent," he explained. "To me, it's a matter of when, not if, there are attempts to disrupt elections, whether at the national level or at a particular polling station."

To prepare, TrueMedia has built a web of capabilities and infrastructure. The organization runs its own algorithms on potential deepfakes while paying collaborating startups such as Reality Defender and Sensity to run their algorithms in parallel, to pool efforts and cross-check findings.

"It really requires a grassroots effort to fight this tsunami of disinformation that we're seeing the beginnings of," Etzioni said. "There's no silver bullet, which means that there's no single vendor or a model that gets it all."

To start, TrueMedia tunes a variety of open-source models. "We run classifiers that say yes or no" to each potential deepfake, Etzioni said. Run as an ensemble, the classifiers can pool answers from each model — and the team is seeing over 90% accuracy.
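
Pooling the yes/no verdicts and mapping the pooled share onto labels like the ones TrueMedia shows users can be sketched as below; the thresholds are hypothetical, since TrueMedia has not published its actual cut-offs:

```python
def pooled_label(votes: list[bool], suspicious_at: float = 0.6,
                 authentic_at: float = 0.2) -> str:
    """Pool per-classifier deepfake verdicts (True = looks fake) into one
    of TrueMedia's three user-facing labels. Thresholds are hypothetical."""
    share = sum(votes) / len(votes)
    if share >= suspicious_at:
        return "highly suspicious"
    if share <= authentic_at:
        return "authentic"
    return "uncertain"

print(pooled_label([True, True, True, False]))    # highly suspicious
print(pooled_label([False, False, False, False])) # authentic
```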

"Some of these classifiers have generative models embedded in them, but it's not the case that we're just running Transformers," Etzioni continued, referring to the ubiquitous "attention" model of AI that gave way to several others, including GPT-4, Google's Gemini, Anthropic's Claude, and Meta's Llama.

TrueMedia is not, for the moment, offering source code for the models, nor publishing technical details or disclosing training data sets. "We are being circumspect about sources and methods, at least right now," said Etzioni. "We're a nonprofit, so we're not attempting to create anything proprietary — the only thing is we're in an unusual position because we are in an adversarial landscape."

Etzioni expects further disclosure can happen in time. "We just need to figure out the appropriate structures," he said.

To support running all these models, TrueMedia has enlisted the help of startup OctoAI, which was founded by Etzioni's friend and colleague at the University of Washington, Luis Ceze.

OctoAI, which cut its teeth improving AI performance across diverse computer chips and systems, runs a cloud service to smooth the work of training models and serving up predictions. Developers who want to run LLMs and the like can upload their model to the service, and OctoAI takes care of the rest.

TrueMedia's inference needs are "pretty complex, in the sense that we're both accessing vendor APIs, but also running many of our own models, tuning them," said Etzioni. "And we have to worry a lot about security because this is a place where you can be targeted by denial-of-service attacks."

"As we get closer to the elections, we expect the volume to be pretty high" for performing queries against the TrueMedia models, says OctoAI founder Luis Ceze. "We want people to not have to wait for too long or maybe lose patience."

TrueMedia and its collaborators are expecting a rising tide of deepfake queries. "Especially as we get closer to the elections, we expect the volume to be pretty high" for performing queries against the models, Ceze said in an interview with ZDNET.

The coming increase means speed and scale are a concern. "We, as a society in general, want people using it, and want the media to use it and validate its images," Ceze added. "We want people to not have to wait for too long or maybe lose patience."

"The last thing we want is to crumble under a denial-of-service attack, or to crumble because we didn't set up auto-scaling properly," said Etzioni. According to him, TrueMedia already has thousands of users. He anticipates that, as the year rolls on, "we will have several of the leading media organizations worldwide using our tools, and we'll easily have tens of thousands of users."

So how will TrueMedia know if it is having an impact?

"Will we prevent the election from being swayed?" Etzioni mused. "You know, you can use fancy words like 'protect the integrity of the election process'; that's too grandiose for me. I just want to have that tool be available."

For Etzioni, the goal is to create transparency — and conversation — around deepfakes. TrueMedia's content labels are public, so viewers can share, compare, and contest their findings. That extra check is important: even with more than 90% accuracy, Etzioni admits that TrueMedia's ensemble isn't perfect. "The way ChatGPT can't avoid hallucinations, we can't avoid making errors," he said.

Skeptics will question the kind of influence wielded by a non-profit with no disclosed source code, whose training data sets are not open to public scrutiny. Why should anyone trust the labels that TrueMedia is generating?

"I think that's a fair question," said Etzioni. "We're only six weeks old, we have more disclosure to do."

That said, Etzioni emphasized the openness of TrueMedia's approach, as compared to other options. "We are an open book in terms of results, unlike a lot of the other tools out there that are available under the hood or for sale," he said.

TrueMedia expects to publish an analysis of the current state of deepfakes in the coming weeks. "We're getting people to think more critically about what they see in media," he said.

Making AI pay off at the enterprise edge

An edited interview with Devin Yaung, Senior Vice President, Group Enterprise IoT Products and Services at NTT Ltd.

Devin Yaung is global lead of IoT for NTT Ltd. Yaung’s consulting background included stints at Accenture and PwC. He’s been at NTT for almost three years now.

In this interview, Yaung and I discussed the practical approach that NTT takes with clients from a system integrator’s perspective. This interview took place after Upgrade 2024, an annual event that NTT hosted in San Francisco.

Background on NTT. Large conglomerate, Japanese phone company. NTT DoCoMo, NTT East, and NTT West focus on the domestic market. Everything outside of Japan–the global business of NTT–is now called NTT Data Inc.

NTT Data Inc is the traditional NTT Limited–data centers, submarine cables, network, cloud and edge business–combined with the systems integration and consulting capabilities of NTT Data to give clients a one-stop shop for the full stack.

The big picture on edge computing activities. The battle for the AI and machine learning future is going to be won at the edge.

There’s a lot of buzz around AI, machine learning, and all these analytical platforms, for good reasons, but the issue is that it’s garbage in/garbage out with whatever you’re using as input to train these models.

A lot of these AI things are calculators–for now. And your calculators are only as good as your input. The data that is feeding these AI models–your ERP models, your analytics models–is going to be the most important. When you’re training your model on imperfect data, you’re not going to have the best results.

The AI timeline and current challenges. AI is still in its infancy. What’s really changing things is the cost to do things. Years ago, the compute power wasn’t there, and the cost to train a model was extremely high.

Then you got more open source, and you got better compute, so that now it’s accessible to most enterprises, to you and me.

But the problem is that we’re running out of data to feed these models. For example, say I’ve invested a lot in enterprise resource planning (ERP) systems. So now I’m going to plan how to schedule my resources such as my work shifts, my raw materials, etc.

If I have no visibility into what my demand and inventory is, where my workers and raw materials are, it’s very hard to plan my resources. I don’t know what my customer demand is. I’m going to be very reactive every time there’s an order. I may need to put in another shift.

But if I knew how inventory was flowing, and I could use AI and all these things to model it out, I could start forecasting demand. I could determine that I need two shifts, for example, plan months ahead, and order the raw materials I need; I could be much more efficient. Having real-time, trusted data on the back end can help justify large investments in what we’re doing in AI and ML.

Thoughts on supply chain collaboration. If you think about every aspect of using data and AI to predict where things are going to be, technology is one part of it. There are platforms. There’s the edge that collects the data to feed these platforms, that then makes the recommendations.

But the hardest thing is that once you have that recommendation, you have to take action upon it. There’s all the systems integration and the integration into the backend ERP system to say, “These are the actions that need to be taken, and these are the people accountable for taking that action.”

Predictive maintenance example. Let’s take your car: it needs an oil and filter change, or a tire change. Unless you take action upon it, to say, “I’m going to actually do that,” all these alerts and notifications mean nothing.

So in an enterprise setting, when assigning the work, creating a trouble ticket, ordering the parts and then monitoring that the workflows have been kicked off, and seeing that all the work has been done satisfactorily–that’s when the data is actionable.

Most clients say, “I’m inundated with data. It’s like a nagging person, and I ignore 80 percent of the alerts that I do get because they’re not actionable.” Or, “I don’t trust that data. I’m going to go and verify.”

Not just data, but meaningful process change. As a core network company, you go back to the old days of FCAPS (an acronym for the five working levels of network management: fault, configuration, accounting, performance and security. See https://www.techtarget.com/searchnetworking/definition/FCAPS for more information.) The focus was correlating those alerts and looking for the root cause analysis. What does all this data mean?

What is the cause of the events and the alerts that I’m getting? From a data perspective, the four key things are (1) collecting the data, (2) making sense of it, (3) making sure that it’s trusted, and (4) making it actionable.

But that’s just the technology viewpoint. What about the people involved? Until you can change the operating model, incentivizing the people to take action on it through a process change, nothing gets done.

Let’s say a water leak has sprung in the basement of a building. For the person dispatching a technician to fix the leak, their metrics are based on reducing overtime. “Wow, it’s Sunday night. I’m going to get in trouble because it’s going to be time and a half to come in on Sunday. I’m going to wait until Monday, when my metrics that I’m measured on–saving money on operations–is what matters.” And the whole time water is filling up the basement.

Are you tying the metrics of the people to the outcome?

Regulatory issues. One of the things that has been challenging is the whole regulatory security and governance environment. And regulation is one of the biggest things because it’s all about moving data. If it’s here in the United States, every state has its own requirements.

For instance, California and Illinois have very strong Health Insurance Portability and Accountability Act (HIPAA) laws. California has strong data privacy requirements, for the most part. We tend to start with GDPR as a baseline, and Europe has recently passed its AI Act. Are provisions like those going to be adopted in the States?

Companies who are thinking about using AI are saying, “I could use it today, but where’s the regulatory environment going to be? A year from now I may find out I’m in violation of one rule or another, and this may get me in trouble.” That aspect is one of the biggest barriers to adoption.

Companies have to look at these kinds of things in all their dimensions, rather than just in a silo. If all you have is a point solution, that’s not very effective.

What transformation clients really want. Everyone has an edict–for example, digital transformation. Everyone is saying, “Go do something in AI because we don’t want to fall behind.” But because a lot of these things aren’t proven, clients tell us, “Prove this is going to work.” And once they succeed, then they can start adding on. It starts with a small pilot, and then it moves on.

Let’s take the example of predictive maintenance on exhaust fans in a building. There can be hundreds of exhaust fans at places like hospitals. The master mechanic at such a facility has to spend three hours a day walking the roof doing inspections–not only of the fans but of everything else they’re concerned with. You could put a vibration sensor on each fan to tell you when a bearing or a fan blade is broken.
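
The alerting side of a sensor like that is straightforward: compare each reading against severity thresholds. A minimal sketch; the limits here are hypothetical, loosely modeled on vibration-severity zones rather than taken from any NTT deployment:

```python
def fan_alert(rms_vibration_mm_s: float, warn: float = 4.5,
              critical: float = 7.1) -> str:
    """Classify a fan's RMS vibration reading; thresholds are hypothetical."""
    if rms_vibration_mm_s >= critical:
        return "critical: dispatch technician"
    if rms_vibration_mm_s >= warn:
        return "warning: schedule inspection"
    return "ok"

# A dashboard would run this over every fan's latest reading
print(fan_alert(2.0))  # ok
print(fan_alert(8.3))  # critical: dispatch technician
```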

You could sensor it up, but then the client’s question is, “Will my master mechanic trust that dashboard to say that all exhaust fans are working fine, or is he going to say, ‘Maybe those sensors are wrong. I do need to walk up and inspect the fans every time.’ But then I’m not saving any money because I’m still hiring a third party to do all this preventive maintenance.”

The client says, “I need those responsible to get addicted to the data. I want them to trust it to the point where they can go do their regular work without having to do the visual inspection too.”

Logistics warehouse example. Another example involves a logistics warehouse. The pickers in the warehouse were making errors because the products tend to look the same. We trained computer vision to validate that each order is correct so that operations can proceed.

Additionally, we were able to bring in our supply chain team and say, “Let’s redesign the whole pick process to reduce the multiple points of failure, and then use the data to validate that effort.” The client really liked that we were able, not only to bring in the technology, but to make people comfortable with the technology. We gave them reassurance, saying “This is not spying on you. This is an audit trail that makes things more efficient, so that we don’t have as much waste.”

This same company had piloted the use of robotics in the picking process, but no one had thought to coordinate the robotics with the work schedule, so that the robot had a human to hand the picked product off to. Obviously, they fixed that issue.

Sometimes we get so enamored with technology and we think it’s turnkey. And we forget the human aspect of it. Sometimes when watching what’s happening in a factory, I find myself wondering, “Is this the best use of someone’s time?” For example, watching a person take parts off a stamp and then hang them on a hanger for painting. Repeatedly. What’s going through that person’s head? Automation would free them up to do something more value-added.

Or hearing from someone whose job it is to pore over utility bills, when AI could analyze the bills and free the staff up to do something more strategic.

For now, humans will not be taken out of the equation.

Taking a more holistic, practical approach to transformation. We have to take a look at the full, end-to-end picture, what it is that we want to accomplish, and why we’re doing it. There has to be an ROI, or a business case. A tire company approached us and said, “We want to do tire as a service. We would love to be able to predict the tire wear.”

Someone had come up with a system to predict tire wear on trucks. But it cost $100,000, which was just a non-starter for tires on a truck.

We’ve had conversations about IoT-connected flip flops… This sort of conversation underscores that, just because you can do it, doesn’t mean it makes fiscal sense.

AMD Reveals Ryzen 9000 CPUs and AI PC Architecture at Computex 2024

AMD has a broad portfolio of GPUs, CPUs, architectures and networking products and recently joined a standard-building group intended to create standards for AI chip technology in opposition to rival NVIDIA. Its latest hardware, announced at Computex in Taipei on June 2, brings improved AI performance with products including the Ryzen 9000 series and the Ryzen AI 300 chip, which is a repackaging of the Ryzen 9 chips.

AMD’s next desktop processors are the Ryzen 9000 series

The Zen 5 architecture and XDNA 2 NPU power the Ryzen 9000 series, which can be leveraged in desktops for productivity, content creation or gaming. The Ryzen 9000 series desktop processors will be available in July 2024.

The Ryzen 9000 series desktop processor line shows a variety of size and performance options. Image: AMD

The series consists of the Ryzen 9 9950X and 9900X, Ryzen 7 9700X, and Ryzen 5 9600X processors.

The Ryzen 9 9950X enhances performance on heavy-duty programs like Blender or Adobe tools with AI features and decreases latency in gaming; AMD claims it outperforms the Intel Core i9. Decreased latency comes in handy for AI in particular. Some models in this series show a marked decrease in heat output, with just 65W thermal design power in the Ryzen 7 9700X and Ryzen 5 9600X.

AMD hardware will power AI PCs

AMD hardware will appear in a wide variety of upcoming laptops from partners including Microsoft, HP, Lenovo and Asus.

“They’re going to get to know you,” said AMD’s Donny Woligroski, senior technical marketing manager of consumer processors, in a press briefing, referring to AI PCs. “They’re going to deliver new levels of intelligent, really personalized PC experiences. And we think it’s [AI PCs are] an inflection point that is really going to drive demand and PC consumption in the coming years.”

AI will enable real-time audio translation, transcription and generation on-device, providing centralized, personalized capabilities. New in this space is the AMD Ryzen AI 300 Series, built on AMD XDNA 2 and Zen 5 CPU cores, with AMD RDNA 3.5 graphics, for AI PCs.

The 3rd Gen AMD Ryzen AI 300 Series will be in some laptops from Microsoft, HP, Lenovo and Asus. Image: AMD

AMD’s XDNA 2 provides an impressive 50 TOPS of compute power; TOPS measures how many trillions of operations per second a chip can perform for AI inference. Zen 5 brings up to 12 cores, which is a lot for a slim laptop. Compare these to Qualcomm Snapdragon X Elite at 45 TOPS and Apple M4 at 38 TOPS.
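
A back-of-envelope way to read those TOPS figures: divide the operations in one inference pass by the rated throughput. The workload size below is hypothetical, and the estimate ignores memory bandwidth and utilization, so it is a best-case ceiling rather than a prediction:

```python
def seconds_per_pass(ops_per_pass: float, tops: float) -> float:
    """Lower-bound latency for one inference pass at a given TOPS rating
    (trillions of operations per second), assuming perfect utilization."""
    return ops_per_pass / (tops * 1e12)

# Hypothetical workload needing 14 trillion operations per pass
for name, tops in [("XDNA 2", 50), ("Snapdragon X Elite", 45), ("Apple M4", 38)]:
    print(f"{name}: {seconds_per_pass(14e12, tops):.3f} s")
```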

Intel’s upcoming Lunar Lake could prove to be another competitor to the XDNA 2 architecture, but it has not yet been revealed in detail.

“This is an incredibly exciting time for AMD as the rapid and accelerating adoption of AI is driving increased demand for our high-performance computing platforms,” said Dr. Lisa Su, AMD chair and CEO, in a press release.

Upcoming Instinct and EPYC products

At Computex, AMD also announced a new roadmap for the release of AMD Instinct MI325X accelerators, which are now expected to ship in the fourth quarter of 2024.

In addition, the 5th Gen AMD EPYC processors for telecommunications and networking, expected to ship in the second half of 2024, will use the next-generation Zen 5 CPU core.

XDNA NPU

The XDNA 2 NPU architecture provides five times the performance capability of AMD’s first NPU and two times the power efficiency when running generative AI workloads. Since generative AI workloads may be running continuously behind the scenes, that power efficiency may be key to consistent performance.

Zen 5 architecture enhances the XDNA 2 NPU

AMD said the Zen 5 architecture provides:

  • Improved branch prediction.
  • Improved accuracy and latency.
  • Higher throughput, with wider pipelines and vectors.
  • Deeper window size across the designs made with it.

Zen 5’s instruction bandwidth, data bandwidth and AI performance are double that of the previous generation, AMD said.

Zen 5 is not an incremental generational jump, according to Woligroski. “Zen 5 is a sweeping update,” he said during a prebriefing for the press.

Two new chipsets are compatible with the Ryzen 9000 processors

AMD revealed two new chipsets at Computex: the AMD Socket AM5 X870 and Socket AM5 X870E. These chipsets work with Ryzen processors from 7000 to 9000. These new motherboards come standard with USB 4.0, a significant upgrade, and with PCIe Gen 5.

These chipsets offer higher EXPO overclock support for performance tuning and could provide a new option for people upgrading from AMD’s long-standing AM4 platform to AM5.