Data Engineering Awards 2024: Meet the Winners

The Data Engineering Summit 2024 culminated with the highly anticipated Data Engineering Awards ceremony, celebrating the outstanding achievements of individuals and organizations in the field of data science and engineering. The awards recognized exceptional contributions, innovation, and leadership, highlighting the transformative power of data-driven work.

All submissions were assessed by our esteemed panel of industry veterans: Sol Rashidi – AI & Data Strategic Advisor at imaware; Giorgio Suighi – Global MD, Data Strategy & Global Lead for Identity Resolution at Mindshare; Amberle Carter – Chief Data & Analytics Officer at Texas Department of Family and Protective Services; Yogananda Domlur Seetharama – Director of Data Science at Walmart Global Tech; and Joe Kleinhenz – VP, Data Science, Auto Line at Allstate.

Data Engineer of the Year:

This category honored exceptional data engineers for their groundbreaking contributions:

Mitesh Mangaonkar, Lead Data Engineer at Airbnb: Mitesh spearheads transformative generative AI applications at Airbnb, architecting innovative data pipelines using advanced technologies to fuel trust and safety initiatives.

Saurabh Pramanick, Head of Data Governance at Bank Muscat: Saurabh leads Bank Muscat’s data-centric initiatives, addressing data challenges and driving AI initiatives to establish responsible AI practices.

Pan Singh Dhoni, Engineering Manager & Solution Architect, Enterprise Data Platform at Five Below: Pan is an International Award winner with extensive IT experience, leveraging cloud computing, big data, and AI to drive innovation.

Data Engineering Company of the Year:

This category recognized companies that have excelled in data engineering:

DBS Technology Services India Pvt Ltd: DBS Tech India empowers global businesses to solve pressing financial and banking challenges with unmatched passion and talent.

Indegene: Indegene is a digital-first life sciences commercialization company, helping biopharmaceutical and medical device companies develop products and grow their impact.

Polestar Solutions: Polestar Solutions helps customers derive sophisticated insights from their data, offering a comprehensive range of analytics services.

Publicis Sapient: Publicis Sapient, a digital transformation company, delivers impactful business solutions through its expert SPEED capabilities.

Innovative Data Engineering Project of the Year:

This category honored innovative projects that have made significant impacts:

EXL Service: EXL’s Cookiepocalypse initiative addresses the phasing out of third-party cookies and evolving data privacy regulations.

EY: EY developed a cloud-native data insights solution for its Assurance service line, improving data accuracy, quality, and accessibility.

MathCo: MathCo’s Customer360 project for a leading US retailer enhanced their marketing strategy through personalized content and a modern, cloud-based platform.

Valiance Solutions: Valiance Solutions’ Wildlife Eye project implemented a human-animal conflict mitigation system in tiger reserves, enhancing safety for wildlife and local communities.

Leadership in Data Engineering:

This category recognized leaders who have made significant contributions to the field:

Puneet Pandhi, Vice President – Risk and Product Data Strategy at American Express: Puneet has nearly 15 years of experience in technology and product domains, successfully executing complex, data-driven projects.

Dr. Santosh Karthikeyan Viswanathan, Technical Director at AstraZeneca: Dr. Santosh provides strategic leadership for the Data and Analytics platform in R&D at AstraZeneca and has been recognized as a Prominent Scholar in the World Book of Records, London.

Sandeep Kumar, Executive Director / Partner at Deloitte: Sandeep is a trusted data leader with 19+ years of experience, steering data-led transformation projects across various industries.

Yugank Aman, Global Senior Director of Engineering at PepsiCo: Yugank has over 16 years of experience driving product innovation and delivering data-driven solutions, leading a team of 300+ engineers across five countries.

Outstanding Data Engineering Team:

This category honored teams that have demonstrated excellence in data engineering:

Merkle: Merkle powers the experience economy with a people-centric approach to digital business transformation.

Pluralsight: Pluralsight builds better products through online courses and data-driven insights, fostering continuous learning.

Siemens Healthineers: Siemens Healthineers enables healthcare providers to enhance precision medicine and patient experience through digital innovation.

Sigmoid: Sigmoid excels in data engineering, transforming how organizations harness data to drive strategic objectives.

Tiger Analytics: Tiger Analytics helps Fortune 1000 companies solve challenges with full-stack AI and analytics services, driving value at scale.

The Data Engineering Awards 2024 highlighted the remarkable achievements in the data science and engineering landscape. We extend our heartfelt congratulations to all the winners for their exceptional contributions and innovative work. Here’s to pushing the boundaries of data-driven excellence!

The post Data Engineering Awards 2024: Meet the Winners appeared first on AIM.

Revolutionizing Data Management: Microsoft Fabric Meets WhereScape Automation

In the rapidly evolving world of data management, the integration of Microsoft Fabric with WhereScape’s automation tools marks a pivotal advancement. This synergy not only redefines the efficiencies of data operations but also empowers organizations to navigate the complexities of digital transformation with unprecedented ease.

Streamlining Migration with Precision

Migrating to advanced systems like Microsoft Fabric can be daunting. WhereScape data warehouse automation simplifies this transition from legacy systems, ensuring a seamless migration that minimizes downtime and maximizes data integrity. By automating the migration process, WhereScape helps businesses quickly adapt to the robust capabilities of Microsoft Fabric, facilitating a smooth transition without the typical complexities involved.

Empowering Data Mesh Architectures

Microsoft Fabric’s decentralized mesh fabric architecture is designed to support dynamic data integration and analysis across diverse domains. WhereScape’s tools enhance this architecture by streamlining the development and maintenance of data products, thereby accelerating the deployment and reducing operational burdens. This integration ensures that data flows effortlessly across the organization, enabling more informed decision-making and strategic agility.

Enhancing Data Transformation and Modeling

At the core of this integration is the ability to efficiently model and transform data within Microsoft Fabric using WhereScape’s low-code interface. This not only speeds up the data handling processes but also significantly reduces the need for extensive coding, allowing teams to focus on strategic initiatives rather than mundane tasks.

Ensuring Compliance through Improved Data Lineage and Documentation

WhereScape’s solutions provide comprehensive documentation and clear visibility into data lineage within Microsoft Fabric environments. This is crucial for maintaining compliance with regulatory requirements and simplifying audit processes. Organizations can now manage their data with greater transparency and accountability.

Beyond OneLake: A Versatile Data Integration Framework

WhereScape extends the capabilities of Microsoft Fabric by facilitating data integration across multiple platforms. Microsoft Fabric is built on OneLake, which presents a unique opportunity, as WhereScape can facilitate the identification and selection of source files as needed. This simplifies the process, so users don't have to move between products and manually enter the source files into WhereScape. This versatility is also beneficial for organizations looking to leverage data beyond the confines of OneLake or Synapse systems, offering a broader spectrum of data management possibilities.

AI-Infused Data Management

Microsoft Fabric integrates cutting-edge AI technologies, including conversational AI capabilities through Microsoft Copilot, into every layer of its data platform. WhereScape leverages these AI enhancements to provide more intuitive and responsive data management tools, enabling users to interact with their data in natural language and automate complex data workflows.

A Unified and Simplified Experience

The integration of Microsoft Fabric and WhereScape brings together all necessary tools into a single, unified platform. This amalgamation lowers the barriers to effective data governance, integration, and security, making it easier for teams to manage data workflows and analytics within a unified ecosystem that spans across Microsoft 365 applications.

Deep Dive into Microsoft Fabric's Core Components

Microsoft Fabric is not just a data management tool but a comprehensive SaaS platform integrating multiple Microsoft services into a unified solution. This includes everything from real-time analytics to business intelligence and data integration, all residing under the umbrella of OneLake. OneLake serves as a centralized multi-cloud data lake, enabling seamless data sharing and management across the enterprise.

Advantages of Microsoft Fabric's SaaS Foundation

Microsoft Fabric utilizes a Software as a Service (SaaS) platform, meaning IT teams can easily set up core enterprise features all in one place, and permissions will automatically apply across all the services underneath.

The SaaS foundation of Microsoft Fabric simplifies data integration and management, offering a range of integrated analytics services that ensure a consistent user experience and easy access to assets across Power BI, Azure Synapse, and Azure Data Factory. This not only streamlines operations but also enhances data governance and security, providing a centralized administration across all services.

Looking Forward: Upcoming Integrations and Enhancements

The future of WhereScape and Microsoft Fabric integration looks promising, with plans to introduce new functionality such as enhanced data pipeline orchestration and expanded data vault support within the Microsoft Fabric environment. These upcoming features are set to further simplify data management processes, enhance scalability, and offer more robust data governance capabilities.

Explore Cutting-Edge Integration from WhereScape Today

Leverage the transformative capabilities of Microsoft Fabric and WhereScape’s automation to redefine data management and accelerate digital transformation. This powerful integration simplifies migrations, enhances data mesh architectures, streamlines transformation and modeling processes, and ensures compliance through improved data lineage and documentation. With AI-infused management tools and a unified, simplified user experience, Microsoft Fabric and WhereScape set the stage for an efficient, agile, and scalable data environment.

Contact us to discover how this dynamic duo can enhance your data strategies and propel your organization toward digital excellence.

18 Free AI Courses by NVIDIA in 2024

NVIDIA is one of the most influential hardware giants in the world. Apart from its much sought-after GPUs, the company also provides free courses to help you understand more about generative AI, GPU, robotics, chips, and more.

Most importantly, all of these are available free of cost and can be completed in less than a day. Let’s take a look at them.

1. Accelerating Data Science Workflows with Zero Code Changes

Efficient data management and analysis are crucial for companies in software, finance, and retail. Traditional CPU-driven workflows are often cumbersome, but GPUs enable faster insights, driving better business decisions.

In this workshop, participants will learn to build and execute end-to-end GPU-accelerated data science workflows for rapid data exploration and production deployment. Using RAPIDS™-accelerated libraries, they can apply GPU-accelerated machine learning algorithms, including XGBoost, cuGraph’s single-source shortest path, and cuML’s KNN, DBSCAN, and logistic regression.
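To make the graph-analytics piece concrete: cuGraph’s single-source shortest path computes the cheapest route from one vertex to every reachable vertex. The sketch below is a plain-Python Dijkstra over a toy adjacency dict; it is not the course material or the GPU implementation, just a small CPU reference for what the call computes, with graph and node names invented for illustration:

```python
import heapq

def sssp(graph, source):
    """Single-source shortest path (Dijkstra) over a weighted adjacency dict.
    cuGraph's GPU version computes the same result for graphs with millions
    of edges; this is only a small CPU reference."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Toy graph: node -> [(neighbor, edge_weight), ...]
graph = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 2.0), ("d", 5.0)],
    "c": [("d", 1.0)],
}
print(sssp(graph, "a"))  # {'a': 0.0, 'b': 1.0, 'c': 3.0, 'd': 4.0}
```

Note the path a→b→c→d (cost 4.0) beats the direct edges; that relaxation of longer direct routes is exactly what SSSP does at scale on the GPU.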

More details on the course are available here – https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+T-DS-03+V1

2. Generative AI Explained

This self-paced, free online course introduces generative AI fundamentals, which involve creating new content based on different inputs. Through this course, participants will grasp the concepts, applications, challenges, and prospects of generative AI.

Learning objectives include defining generative AI and its functioning, outlining diverse applications, and discussing the associated challenges and opportunities. All you need to participate is a basic understanding of machine learning and deep learning principles.

To learn more about the course, check it out here – https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-NP-01+V1

3. Digital Fingerprinting with Morpheus

This one-hour course introduces participants to developing and deploying the NVIDIA digital fingerprinting AI workflow, providing complete data visibility and significantly reducing threat detection time.

Participants will gain hands-on experience with the NVIDIA Morpheus AI Framework, designed to accelerate GPU-based AI applications for filtering, processing, and classifying large volumes of streaming cybersecurity data.

Additionally, they will learn about the NVIDIA Triton Inference Server, an open-source tool that facilitates standardised deployment and execution of AI models across various workloads. No prerequisites are needed for this tutorial, although familiarity with defensive cybersecurity concepts and the Linux command line is beneficial.

To learn more about the course, check it out here – https://courses.nvidia.com/courses/course-v1:DLI+T-DS-02+V2/

4. Building A Brain in 10 Minutes

This course delves into neural networks’ foundations, drawing from biological and psychological insights. Its objectives are to elucidate how neural networks employ data for learning and to grasp the mathematical principles underlying a neuron’s functioning.

While anyone can execute the code provided to observe its operations, a solid grasp of fundamental Python 3 programming concepts—including functions, loops, dictionaries, and arrays—is advised. Additionally, familiarity with computing regression lines is also recommended.
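Those regression-line fundamentals can be sketched in a few lines: a single linear neuron, y = w*x + b, trained by gradient descent on mean squared error, recovers the slope and intercept of noiseless data. This is an illustrative sketch rather than the course’s notebook; the data and learning rate are chosen arbitrarily:

```python
# A single linear neuron y = w*x + b trained by gradient descent on MSE,
# recovering the regression line for data generated by y = 2x + 1.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]   # exactly 2x + 1, no noise

w, b, lr = 0.0, 0.0, 0.02
for _ in range(5000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # 2.0 1.0
```

The same update rule, applied across many neurons and layers, is the learning mechanism the course builds up to.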

To learn more about the course, check it out here – https://courses.nvidia.com/courses/course-v1:DLI+T-FX-01+V1/

5. An Introduction to CUDA

This course delves into the fundamentals of writing highly parallel CUDA kernels designed to execute on NVIDIA GPUs.

One can gain proficiency in several key areas: launching massively parallel CUDA kernels on NVIDIA GPUs, orchestrating parallel thread execution for large dataset processing, effectively managing memory transfers between the CPU and GPU, and utilising profiling techniques to analyse and optimise the performance of CUDA code.
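Since the course itself runs on NVIDIA GPUs, here is a GPU-free way to picture the indexing it teaches: a pure-Python stand-in that mimics how each CUDA thread derives a global index from its block and thread IDs and then strides through the data (the grid-stride loop pattern). The kernel body and launch loop below are illustrative only; real CUDA code would be written in CUDA C/C++ or with a compiler such as Numba:

```python
def saxpy_kernel(thread_idx, block_idx, block_dim, grid_dim, a, x, y, out):
    """Body of a SAXPY 'kernel': each simulated thread computes its global
    id from its block/thread coordinates, then handles every element whose
    index it owns, striding by the total thread count (grid-stride loop)."""
    i = block_idx * block_dim + thread_idx   # global thread id
    stride = block_dim * grid_dim            # total threads "launched"
    while i < len(x):
        out[i] = a * x[i] + y[i]
        i += stride

def launch(grid_dim, block_dim, a, x, y):
    """Stand-in for a kernel launch: iterate over every (block, thread)
    pair that the GPU would run concurrently."""
    out = [0.0] * len(x)
    for block in range(grid_dim):
        for thread in range(block_dim):
            saxpy_kernel(thread, block, block_dim, grid_dim, a, x, y, out)
    return out

print(launch(grid_dim=2, block_dim=4, a=2.0, x=[1, 2, 3], y=[10, 10, 10]))
# [12.0, 14.0, 16.0]
```

On a real GPU the two loops in `launch` disappear: all eight threads execute the kernel body simultaneously, which is why the per-thread index arithmetic matters.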

Here is the link to learn more about the course – https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+T-AC-01+V1

6. Building A Brain in 10 Minutes

This course delves into neural networks’ foundations, drawing from biological and psychological insights. Its objectives are to elucidate how neural networks employ data for learning and to grasp the mathematical principles underlying a neuron’s functioning.

While anyone can execute the code provided to observe its operations, a solid grasp of fundamental Python 3 programming concepts—including functions, loops, dictionaries, and arrays—is advised. Additionally, familiarity with computing regression lines is also recommended.

To learn more about the course, check it out here – https://courses.nvidia.com/courses/course-v1:DLI+T-FX-01+V1/

7. Augment your LLM Using RAG

Retrieval-Augmented Generation (RAG), devised by Facebook AI Research in 2020, offers a method to enhance an LLM’s output by incorporating real-time, domain-specific data, eliminating the need for model retraining. RAG integrates an information retrieval module with a response generator, forming an end-to-end architecture.

Drawing from NVIDIA’s internal practices, this introduction aims to provide a foundational understanding of RAG, including its retrieval mechanism and the essential components within NVIDIA’s AI Foundations framework. By grasping these fundamentals, you can initiate your exploration into LLM and RAG applications.
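The two-part architecture described above, a retriever feeding a generator, can be sketched minimally: score documents against the query, then prepend the best match to the prompt the LLM receives. The word-overlap scoring below is a deliberately crude stand-in for the embedding similarity a production retriever would use, and the documents are invented for illustration:

```python
def retrieve(query, docs, k=1):
    """Rank documents by word overlap with the query (a crude stand-in
    for the embedding similarity a production retriever would use)."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, docs):
    """Prepend retrieved context so the generator answers from fresh,
    domain-specific data instead of relying on its training set."""
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

docs = [
    "GPU driver 550 requires CUDA 12.4 or newer.",
    "The cafeteria opens at 8 am.",
]
print(build_prompt("which CUDA version does driver 550 need", docs))
```

Everything downstream of `build_prompt` is an ordinary LLM call, which is why RAG needs no model retraining: freshness lives entirely in the retrieval step.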

To learn more about the course, check it out here – https://courses.nvidia.com/courses/course-v1:NVIDIA+S-FX-16+v1/

8. Getting Started with AI on Jetson Nano

The NVIDIA Jetson Nano Developer Kit empowers makers, self-taught developers, and embedded technology enthusiasts worldwide with the capabilities of AI.

This user-friendly, yet powerful computer facilitates the execution of multiple neural networks simultaneously, enabling various applications such as image classification, object detection, segmentation, and speech processing.

Throughout the course, participants will utilise Jupyter iPython notebooks on Jetson Nano to construct a deep learning classification project employing computer vision models.

By the end of the course, individuals will possess the skills to develop their own deep learning classification and regression models leveraging the capabilities of the Jetson Nano.

Here is the link to learn more about the course – https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-RX-02+V2

9. Building Video AI Applications at the Edge on Jetson Nano

This self-paced online course aims to equip learners with skills in AI-based video understanding using the NVIDIA Jetson Nano Developer Kit. Through practical exercises and Python application samples in JupyterLab notebooks, participants will explore intelligent video analytics (IVA) applications leveraging the NVIDIA DeepStream SDK.

The course covers setting up the Jetson Nano, constructing end-to-end DeepStream pipelines for video analysis, integrating various input and output sources, configuring multiple video streams, and employing alternate inference engines like YOLO.

Prerequisites include basic Linux command line familiarity and an understanding of Python 3 programming concepts. The course leverages tools like DeepStream and TensorRT, and requires specific hardware components such as the Jetson Nano Developer Kit. Assessment is conducted through multiple-choice questions, and a certificate is provided upon completion.

For this course, you will require hardware including the NVIDIA Jetson Nano Developer Kit or the 2GB version, along with compatible power supply, microSD card, USB data cable, and a USB webcam.

To learn more about the course, check it out here – https://courses.nvidia.com/courses/course-v1:DLI+S-IV-02+V2/

10. Build Custom 3D Scene Manipulator Tools on NVIDIA Omniverse

This course offers practical guidance on extending and enhancing 3D tools using the adaptable Omniverse platform. Taught by the Omniverse developer ecosystem team, participants will gain skills to develop advanced tools for creating physically accurate virtual worlds.

Through self-paced exercises, learners will delve into Python coding to craft custom scene manipulator tools within Omniverse. Key learning objectives include launching Omniverse Code, installing/enabling extensions, navigating the USD stage hierarchy, and creating widget manipulators for scale control.

The course also covers fixing broken manipulators and building specialised scale manipulators. Required tools include Omniverse Code, Visual Studio Code, and the Python Extension. Minimum hardware requirements comprise a desktop or laptop computer equipped with an Intel i7 Gen 5 or AMD Ryzen processor, along with an NVIDIA RTX Enabled GPU with 16GB of memory.

To learn more about the course, check it out here – https://courses.nvidia.com/courses/course-v1:DLI+S-OV-06+V1/

11. Getting Started with USD for Collaborative 3D Workflows

In this self-paced course, participants will delve into the creation of scenes using human-readable Universal Scene Description ASCII (USDA) files.

The programme is divided into two sections: USD Fundamentals, introducing OpenUSD without programming, and Advanced USD, using Python to generate USD files.

Participants will learn OpenUSD scene structures and gain hands-on experience with OpenUSD Composition Arcs, including overriding asset properties with Sublayers, combining assets with References, and creating diverse asset states using Variants.
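For a sense of what a human-readable USDA file looks like, here is a minimal hand-authored scene with one Sublayer composition arc declared in the layer metadata. The prim names and the referenced file `overrides.usda` are hypothetical, chosen only for illustration:

```usda
#usda 1.0
(
    defaultPrim = "World"
    subLayers = [
        @./overrides.usda@
    ]
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 2.0
    }
}
```

Opening this layer composes it with `overrides.usda`; Sublayers like this, together with References and Variants, are the composition arcs the course walks through.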

To learn more about the course, here is the link – https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-FX-02+V1

12. Assemble a Simple Robot in Isaac Sim

This course offers a practical tutorial on assembling a basic two-wheel mobile robot using the ‘Assemble a Simple Robot’ guide within the Isaac Sim GPU platform. The tutorial spans around 30 minutes and covers key steps such as connecting a local streaming client to an Omniverse Isaac Sim server, loading a USD mock robot into the simulation environment, and configuring joint drives and properties for the robot’s movement.

Additionally, participants will learn to add articulations to the robot. By the end of the course, attendees will gain familiarity with the Isaac Sim interface and documentation necessary to initiate their own robot simulation projects.

The prerequisites for this course include a Windows or Linux computer capable of installing Omniverse Launcher and applications, along with adequate internet bandwidth for client/server streaming. The course is free of charge, with a duration of 30 minutes, focusing on Omniverse technology.

To learn more about the course, check it out here – https://courses.nvidia.com/courses/course-v1:DLI+T-OV-01+V1/

13. How to Build OpenUSD Applications for Industrial Twins

This course introduces the basics of the Omniverse development platform. One will learn how to get started building 3D applications and tools that deliver the functionality needed to support industrial use cases and workflows for aggregating and reviewing large facilities such as factories, warehouses, and more.

The learning objectives include building an application from a kit template, customising the application via settings, creating and modifying extensions, and expanding extension functionality with new features.

To learn more about the course, check it out here – https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-OV-13+V1

14. Disaster Risk Monitoring Using Satellite Imagery

Created in collaboration with the United Nations Satellite Centre, the course focuses on disaster risk monitoring using satellite imagery, teaching participants to create and implement deep learning models for automated flood detection. The skills gained aim to reduce costs, enhance efficiency, and improve the effectiveness of disaster management efforts.

Participants will learn to execute a machine learning workflow, process large satellite imagery data using hardware-accelerated tools, and apply transfer-learning for building cost-effective deep learning models.

The course also covers deploying models for near real-time analysis and utilising deep learning-based inference for flood event detection and response. Prerequisites include proficiency in Python 3, a basic understanding of machine learning and deep learning concepts, and an interest in satellite imagery manipulation.

To learn more about the course, check it out here – https://courses.nvidia.com/courses/course-v1:DLI+S-ES-01+V1/

15. Introduction to AI in the Data Center

In this course, you will learn about AI use cases, machine learning, and deep learning workflows, as well as the architecture and history of GPUs. With a beginner-friendly approach, the course also covers deployment considerations for AI workloads in data centres, including infrastructure planning and multi-system clusters.

The course is tailored for IT professionals, system and network administrators, DevOps, and data centre professionals.

To learn more about the course, check it out here – https://www.coursera.org/learn/introduction-ai-data-center

16. Fundamentals of Working with OpenUSD

In this course, participants will explore the foundational concepts of Universal Scene Description (OpenUSD), an open framework for detailed 3D environment creation and collaboration.

Participants will learn to use USD for non-destructive processes, efficient scene assembly with layers, and data separation for optimised 3D workflows across various industries.

Also, the session will cover Layering and Composition essentials, model hierarchy principles for efficient scene structuring, and Scene Graph Instancing for improved scene performance and organisation.

To know more about the course, check it out here – https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-OV-15+V1

17. Introduction to Physics-informed Machine Learning with Modulus

High-fidelity simulations in science and engineering are hindered by computational expense and time constraints, limiting their iterative use in design and optimisation.

NVIDIA Modulus, a physics machine learning platform, tackles these challenges by creating deep learning models that outperform traditional methods by up to 100,000 times, providing fast and accurate simulation results.

One will learn how Modulus integrates with the Omniverse Platform and how to use its API for data-driven and physics-driven problems, addressing challenges from deep learning to multi-physics simulations.

To learn more about the course, check it out here – https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-OV-04+V1

18. Introduction to DOCA for DPUs

The DOCA Software Framework, in partnership with BlueField DPUs, enables rapid application development, transforming networking, security, and storage performance.

This self-paced course covers DOCA fundamentals for accelerated data centre computing on DPUs, including visualising the framework paradigm, studying BlueField DPU specs, exploring sample applications, and identifying opportunities for DPU-accelerated computation.

One gains introductory knowledge to kickstart application development for enhanced data centre services.

To learn more about the course, check it out here – https://learn.nvidia.com/courses/course-detail?course_id=course-v1:DLI+S-NP-01+V1

Additional Inputs Contributed – Gopika Raj

The post 18 Free AI Courses by NVIDIA in 2024 appeared first on AIM.

NVIDIA Boosts Real-Time AI for Healthcare with Enterprise-IGX & Holoscan

At NVIDIA COMPUTEX 2024, the GPU giant announced the availability of NVIDIA AI Enterprise-IGX with Holoscan on the IGX platform, improving real-time AI computing capabilities for healthcare, industrial, and scientific applications at the edge. This integration enables faster development and deployment of AI solutions with enterprise-grade software and support.

NVIDIA AI Enterprise-IGX claims to offer high performance, security, and support for the edge computing software stack, streamlining AI-powered operations and the deployment of AI applications at scale.

The inclusion of Holoscan, a sensor-processing platform, further enhances the development and deployment of AI and high-performance computing applications, delivering real-time insights. The combination of these technologies cuts the time and costs required to build advanced AI solutions across various industries, meeting unique performance and regulatory requirements.

The NVIDIA IGX platform, now supporting the RTX 6000 Ada GPU and the IGX Orin 500 system-on-module, delivers significant improvements in AI performance and computing power.

“As software-defined functionality continues to transform businesses across industries, enterprises are seeking powerful edge AI solutions that can meet their unique performance and regulatory requirements,” said Deepu Talla, vice president of robotics and edge computing at NVIDIA.

Customer Stories

Leading medical technology companies are rapidly adopting NVIDIA IGX with Holoscan. For example, Medtronic uses the platform for its GI Genius intelligent endoscopy module, which is the first FDA-cleared AI-assisted colonoscopy tool. This technology helps physicians detect polyps that could lead to colorectal cancer.

Moon Surgical employs IGX with Holoscan to power its Maestro System, a surgical robotics system designed to assist surgeons with precision and control during minimally invasive procedures. The platform’s capabilities have accelerated the development and enhancement of these medical technologies, ultimately improving patient care and safety.

In the industrial sector, ADLINK leverages NVIDIA IGX to build industrial-grade edge AI solutions that enhance factory automation and robotic collaboration. These solutions improve functional safety and high-bandwidth sensor processing, transforming operations like machine movement routing, robotic arm operation, and charging-station monitoring. The platform’s powerful computing capabilities ensure more efficient and seamless human-robot collaboration.

The SETI Institute is another notable adopter, using NVIDIA IGX Orin to power radio astronomy capabilities at the Hat Creek Radio Observatory. This technology enables the processing of multiple terabits per second of radio telescope data, facilitating the detection of weaker and rarer astrophysical phenomena. The advanced capabilities of the IGX platform, combined with Holoscan, provide exceptional computational performance for real-time radar processing and radio astronomy.

Clinical Trials Optimised with NVIDIA’s Meta Llama 3

Available as an NVIDIA NIM inference microservice, Meta Llama 3 supports a wide range of applications, including surgical planning, digital assistants, drug discovery, and clinical trial optimisation.

At COMPUTEX, NVIDIA announced that hundreds of AI ecosystem partners are integrating NIM into their solutions. Over 40 healthcare and life sciences startups and enterprises are using the Llama 3 NIM to build and run applications that accelerate digital biology, digital surgery, and digital health.

Deloitte, for example, uses the Llama 3 NIM and other microservices for drug discovery and clinical trials, driving efficiency in garnering data-based insights from gene to function.

Transcripta Bio uses Llama 3 for accelerated intelligent drug discovery, leveraging its AI modeling suite, Conductor AI, to predict the effects of new drugs. In clinical trials, companies like Quantiphi and ConcertAI utilise NVIDIA NIM to develop generative AI solutions for research and patient care, enhancing workforce productivity and improving outcomes.

Mendel AI uses the Llama 3 NIM for its Hypercube copilot, offering a 36% performance improvement in understanding medical data at scale.

Precision medicine company SimBioSys uses the Llama 3 NIM to analyse breast cancer diagnoses and provide tailored guidance for physicians. Artisight automates documentation and care coordination with ambient voice and vision systems, while AITEM builds healthcare-specific chatbots. Abridge uses the NIM for clinical conversation summarisation, improving the efficiency and accuracy of physician-patient encounters.

Recently, NVIDIA announced its Q1 FY25 results, reporting a profit of $14.88 billion, a roughly 600% increase from the same quarter last year. Revenue reached $26.04 billion, exceeding the $24.65 billion estimate, and earnings per share (EPS) came in at $6.12, well above the projected $5.59.

The post NVIDIA Boosts Real-Time AI for Healthcare with Enterprise-IGX & Holoscan appeared first on AIM.

OpenAI Reports Foreign Interference in India’s Lok Sabha Election Result

As India awaits the outcome of the 2024 Lok Sabha elections, a recent report from OpenAI reveals that Russia, China, Iran, and Israel have leveraged AI models to disseminate false information online.

As per the study, five operations were involved – two from Russia, one from China, one from Iran, and an Israeli political campaign management firm known as STOIC. These operations utilised AI-generated content, including text and images, in their efforts.

One of these campaigns, dubbed Zero Zeno and run by STOIC, aimed to influence the 2024 Indian elections, according to OpenAI, the developer behind ChatGPT. However, the campaigns failed to reach a significant audience or drive engagement.

The study indicates that none of these networks achieved substantial real-world engagement, with OpenAI rating their impact below level 2 on its six-level “breakout scale” for assessing the effectiveness of influence operations.

Responding to OpenAI’s findings, Minister of State for Electronics and IT Rajeev Chandrasekhar called this a “dangerous threat” to democracy, criticising OpenAI for not alerting the public when the threat was first detected in May.

It is absolutely clear and obvious that @BJP4India was and is the target of influence operations, misinformation and foreign interference, being done by and/or on behalf of some Indian political parties.
This is very dangerous threat to our democracy. It is clear vested… https://t.co/e78pbEuHwe

— Rajeev Chandrasekhar 🇮🇳(Modiyude Kutumbam) (@Rajeev_GoI) May 31, 2024

Earlier reports from the Microsoft Threat Analysis Centre (MTAC) also highlighted China’s attempts to use AI-generated content to influence elections in various countries, including India, the US, and South Korea. Similar tactics were reportedly tested during Taiwan’s presidential elections.

Despite these revelations, political parties such as the BJP and Congress have acknowledged using AI tools for campaigning. Political strategists say AI has become a significant asset, with the BJP leading in GenAI utilisation for electoral purposes, while the Congress has used it only minimally.

In a conversation with AIM on the status of AI integration in the 2024 elections, independent political campaigner and strategist Sagar Vishnoi pointed out that the BJP leads the way in employing AI to translate their messaging across various languages.

The post OpenAI Reports Foreign Interference in India’s Lok Sabha Election Result appeared first on AIM.

Yann LeCun Criticises Elon Musk for ‘Batshit-Crazy’ Conspiracy Theories

The drama continues. Yann LeCun, chief AI scientist at Meta, is clearly disappointed in Elon Musk and has voiced substantial criticism of his approach to technology development, media, and political discourse on every possible social media platform, including Threads.

LeCun lauded Musk’s innovations, saying, “I like his cars, his rockets, his solar energy systems, and his satellite communication system. I also like his positions on open source and patents.”

However, LeCun’s admiration for Musk’s technological achievements is tempered by significant disagreements on other fronts.

LeCun expressed alarm at Musk’s promotion of ‘batshit-crazy’ conspiracy theories, citing examples such as “boosting ‘PizzaGate'” and false claims about illegal immigrants corrupting elections. “One would expect a technological visionary to be a rationalist. Rationalism doesn’t work without Truth,” LeCun said.

Further, he expressed concern about Musk’s treatment of scientists and the impact of secrecy on research progress. “Technology/product development may not need openness and publications to advance, but forward-looking research sure does,” he wrote.

He added that secrecy hampers progress and discourages talent from joining efforts in areas like AI, neural interfaces, and material science.

He also criticised Musk’s habit of making grandiose, often unrealistic predictions. “Expressing an ambitious vision for the future is great. But telling the public blatantly false predictions (‘AGI next year’, ‘1 million robotaxis by 2020’, ‘AGI will kill us all, let’s pause’) is very counterproductive and illegal in some cases,” LeCun asserted.

LeCun’s critique extends to Musk’s public stances on political issues, journalism, and academia. He argued that Musk’s positions are “not just wrong but dangerous for democracy, civilization, and human welfare.”

Highlighting the importance of a free and diverse press, LeCun said, “Democracy can’t exist without professional journalists working for a free and diverse press. Only authoritarian enemies of democracy rail against the media.”

LeCun is particularly concerned about Musk’s acquisition of a social media platform, using it to spread what LeCun sees as dangerous political opinions and conspiracy theories.

He criticised Musk’s approach to content moderation, stressing the legal and ethical necessity of regulating certain types of content. “Content moderation is a complicated problem whose best answer is not an attitude of total laissez-faire but a complex trade-off,” LeCun concluded.

LeCun has been engaged in heated discussions with Musk on X over the past few days, debating topics ranging from AGI to the critical role of research scientists in the development of technology companies and products.

Musk, being Musk, is surely having fun with the recent banter.

this is drake vs kendrick for people who know linear algebra pic.twitter.com/VkgAmIEEJU

— sophie (@netcapgirl) June 1, 2024

“Who are you again? I keep forgetting,” taunted Musk. To this, LeCun said: “you know, the guy with a good Twitter game.”

If this drama continues, or takes a serious turn, LeCun might find himself banned from X soon.

The post Yann LeCun Criticises Elon Musk for ‘Batshit-Crazy’ Conspiracy Theories appeared first on AIM.

NVIDIA Unveils ‘Rubin’ Months Ahead of Blackwell Release, AMD Announces ‘Turin’ to Compete

At Taipei’s Computex Conference, NVIDIA CEO Jensen Huang announced the launch of the Rubin AI chip platform, slated for 2026, and the Blackwell Ultra chip, expected in 2025, marking a shift to an annual update cycle for NVIDIA’s AI accelerators.

The Rubin architecture follows the March announcement of the Blackwell model, which is set to ship later in 2024. “We are seeing computation inflation,” Huang stated, highlighting the need for accelerated computing to manage the growing data processing demands. He emphasised NVIDIA’s technology, which promises 98% cost savings and 97% less energy consumption.

Previously, NVIDIA had a two-year update timeline for its AI chips. The shift to an annual release schedule underscores the competitive intensity in the AI chip market and NVIDIA’s efforts to maintain its leadership. The Rubin platform will feature new GPUs and a central processor named Vera, although details were scarce.

Huang announced that the forthcoming Rubin AI platform will incorporate HBM4, the next generation of high-bandwidth memory. This memory type has become a bottleneck in AI accelerator production due to high demand, with leading supplier SK Hynix Inc. largely sold out through 2025. Huang did not provide detailed specifications for the Rubin platform, which is set to succeed Blackwell.

AMD Focusing on AI Workloads

NVIDIA was not alone. During the opening keynote at Computex 2024, AMD Chair and CEO Lisa Su showcased the growing momentum of the AMD Instinct accelerator family, unveiling a multiyear, expanded AMD Instinct roadmap with an annual cadence of leadership AI performance and memory capabilities.

In 2026, AMD plans to release the AMD Instinct MI400 series, based on the AMD CDNA “Next” architecture, which will provide the latest features and capabilities to enhance performance and efficiency for AI training and inference.

Previewed at Computex, the 5th Gen AMD EPYC processors, codenamed “Turin”, will utilise the “Zen 5” core, continuing the high performance and efficiency of the AMD EPYC processor family. These processors are expected to be available in the second half of 2024.

The roadmap begins with the AMD Instinct MI325X accelerator, set to be available in Q4 2024. This accelerator will feature 288GB of HBM3E memory and 6 terabytes per second of memory bandwidth, using the same Universal Baseboard design as the MI300 series. It boasts industry-leading memory capacity and bandwidth, being 2x and 1.3x better than the competition, respectively, and offering 1.3x better compute performance.

Following this, the AMD Instinct MI350 series, powered by the new AMD CDNA 4 architecture, is expected in 2025. It promises up to a 35x increase in AI inference performance compared to the MI300 series with CDNA 3 architecture.

The AMD Instinct MI350X accelerator will be the first product in this series, utilising advanced 3 nm process technology, supporting FP4 and FP6 AI data types, and including up to 288 GB of HBM3E memory.

The post NVIDIA Unveils ‘Rubin’ Months Ahead of Blackwell Release, AMD Announces ‘Turin’ to Compete appeared first on AIM.

AMD Unveils Ryzen AI 300, Aimed at Outperforming Qualcomm’s Chips for Copilot+ PCs

In a bold move, AMD has announced its latest CPUs, targeting Microsoft’s Copilot+ PC initiative with the Ryzen AI 300 series. These new chips are designed to outclass Arm-based competitors, particularly Qualcomm’s Snapdragon X Elite and X Plus.

Announced at the Taiwan Computex conference, AMD is set to introduce the Ryzen AI 300 series in Copilot+ PCs within a few months. These chips promise superior performance compared to both Intel and Qualcomm offerings, featuring advanced neural processing capabilities.

The tech industry remains a battleground for chipmakers, who are increasingly emphasising AI-enhanced performance. AMD’s strategy includes promoting its Ryzen 9000 series for gaming and the Ryzen AI 300 series, which incorporates the new XDNA 2 NPU. The Ryzen AI 9 365 and the Ryzen AI 9 HX 370, the two chips in this series, represent a significant leap in neural processing power.

Microsoft requires NPUs with at least 40 TOPS for Copilot+ PCs, and AMD’s HX 370 and 365 exceed this with 50 TOPS.
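The eligibility question here is simple arithmetic: a chip qualifies for Copilot+ branding only if its NPU clears Microsoft’s 40-TOPS floor. As a rough illustration only (a hypothetical helper, not any official Microsoft or AMD tool), that check can be sketched as:

```python
# Minimal sketch: compare reported NPU throughput against Microsoft's
# 40-TOPS Copilot+ PC requirement, using the figures cited above.
COPILOT_PLUS_MIN_TOPS = 40

# NPU figures reported for AMD's Ryzen AI 300 series chips.
npu_tops = {
    "Ryzen AI 9 HX 370": 50,
    "Ryzen AI 9 365": 50,
}

def meets_copilot_plus(tops: float, minimum: float = COPILOT_PLUS_MIN_TOPS) -> bool:
    """Return True if an NPU's TOPS figure clears the Copilot+ threshold."""
    return tops >= minimum

for chip, tops in npu_tops.items():
    status = "eligible" if meets_copilot_plus(tops) else "not eligible"
    print(f"{chip}: {tops} TOPS -> Copilot+ {status}")
```

Both AMD parts pass with a 10-TOPS margin, which is the headroom the article highlights over the bare requirement.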

“This is an incredibly exciting time for AMD as the rapid and accelerating adoption of AI is driving increased demand for our high-performance computing platforms,” said Lisa Su, chair and CEO of AMD. “At Computex, we were proud to be joined by Microsoft, HP, Lenovo, Asus and other strategic partners to launch our next-generation Ryzen desktop and notebook processors, preview the leadership performance of our next-generation EPYC processors, and announce a new annual cadence for AMD Instinct AI accelerators.”

The Ryzen AI 9 HX 370 boasts 12 cores, 24 threads, and a 5.1 GHz max boost speed, while the Ryzen AI 9 365 features 10 cores, 20 threads, and a 5.0 GHz max speed. Both chips include the RDNA 3.5 built-in GPU for mobile graphics and gaming. These CPUs will debut on new laptops during the Taiwan Computex conference in the coming days.

AMD’s new CPUs are built on the Zen 5 architecture, promising a significant improvement over Zen 4 with double the bandwidth. AMD claims users will experience up to 19% better performance in Geekbench 6 and 13% better in 3DMark physics tests, depending on specific configurations.

For gamers, the Ryzen 9000 series, including the Ryzen 5 9600X, Ryzen 7 9700X, Ryzen 9 9900X, and the high-end Ryzen 9 9950X, offers slightly higher clock speeds and improved power efficiency. The 9950X, for instance, features 16 cores, 32 threads, and a 5.7 GHz clock speed, matching the previous Ryzen 9 7950X3D but with better performance in games like Cyberpunk 2077 and F1 2023.

These gaming-focused CPUs will be available in July. AMD has also committed to supporting the AM5 socket through 2027, providing a clear upgrade path for current users, while support for the AM4 socket will end around 2025.

Intel, meanwhile, has revealed that starting in the third quarter of 2024, its highly anticipated client processors, codenamed Lunar Lake, will power over 80 new laptop designs across more than 20 OEMs.

Lunar Lake boasts over three times the AI performance of its predecessors. With 40+ NPU TOPS, Intel’s next-gen processors are poised to deliver the capabilities required for upcoming Copilot+ experiences. Moreover, Lunar Lake will feature over 60 GPU TOPS, for more than 100 platform TOPS in total.

The post AMD Unveils Ryzen AI 300, Aimed at Outperforming Qualcomm’s Chips for Copilot+ PCs appeared first on AIM.
