Want an AI Job? Check out these new AWS AI certifications


I know many people who are twitchy about AI taking their jobs away. I hear you, and I understand. But AI is also opening the door to new jobs. While the top positions demand esoteric skills, such as knowing how to program with OpenCV, PyTorch, and TensorFlow, there are also jobs out there for people who aren't computer scientists. Amazon Web Services (AWS) is opening the doors for you and me with a suite of training courses and new certifications.

At first, prompt engineering — a fancy way of asking AI chatbots nicely for the best answers — seemed like the way for non-experts to get AI jobs. It turns out that large language models (LLMs) can write and optimize their own prompts just fine.

Also: The best free AI courses

However, there are many other AI-related jobs where you can make good money. Indeed, according to AWS, employers are willing to pay up to 47% higher salaries if you've got AI skills. While top dollar goes to IT professionals, AWS has also found that sales and marketing workers can command 43% higher salaries, while finance professionals can make up to 42% more. That's not hay!

To help workers build and prove their AI expertise, AWS has announced a new AI training portfolio to equip you with the necessary AI skills, plus two new certifications to show that you know your stuff. These include:

AWS Certified AI Practitioner: This foundational-level certification is designed for non-technical workers. It helps you to demonstrate that you know your way around AI, machine learning (ML), and generative AI concepts and tools. This certification is ideal for professionals in marketing, sales, project and product management, HR, finance, and other non-IT roles who want to identify AI opportunities and collaborate effectively with technical teams.

AWS Certified Machine Learning Engineer — Associate: This certification is for technical workers. It's meant to show that you have the skills needed to build, deploy, and maintain AI and ML solutions. It covers essential aspects like optimizing model performance, managing computational resources, updating model versions, and securing AI solutions. This certification is crucial for IT professionals leveraging AI to meet business objectives.

To prepare for these certifications, AWS has launched eleven new training courses on AWS Skill Builder, its digital learning center. Individual subscriptions to this site are $29 a month or $449 a year.

The AI courses include foundational topics such as:

  • Fundamentals of Machine Learning and Artificial Intelligence

  • Exploring Artificial Intelligence Use Cases and Application

  • Essentials of Prompt Engineering

For those aiming for the higher-level AWS Certified Machine Learning Engineer – Associate certification, additional courses cover advanced topics like data transformation techniques, feature engineering, bias mitigation strategies, and data security.

The AWS Certified AI Practitioner training resources include eight free courses. In these, you'll learn about real-world use cases for AI, ML, and generative AI, how to select a foundation model (FM), the concepts and techniques involved in crafting effective prompts, and more.

Also: Want to work in AI? How to pivot your career in 5 steps

Altogether, AWS offers more than 100 AI, ML, and generative AI courses and learning resources on AWS Skill Builder and AWS Educate to help you prepare for the future of work.

This follows Amazon's "AI Ready" initiative, which was meant to provide free AI skills training to 2 million people globally by 2025. It's also, of course, part of the AWS Certification system.

On August 13, you can register for the beta exams of AWS Certified AI Practitioner and AWS Certified Machine Learning Engineer – Associate. The complete certifications should be available by the end of the year.

Also: I spent a weekend with Amazon's free AI courses, and highly recommend you do too

Certifications are valuable for getting jobs. That's especially true in a field like AI, which is so new that it's difficult for employers to know if you really know your way around AI. I strongly suggest that if you think your future lies in AI work, or if just having AI skills can help you with your current job, you give these programs a try. The financial benefits speak for themselves.

Featured

ZutaCore & NeevCloud Partner to Drive Sustainable AI with HyperCool Tech

ZutaCore, a provider of direct-to-chip, waterless liquid cooling solutions, has partnered with NeevCloud, the company behind India’s first AI SuperCloud, to deploy sustainable AI at scale throughout India.

By leveraging ZutaCore’s HyperCool technology in all of its data centers, NeevCloud aims to enable the country to harness the power of AI while significantly reducing environmental impact.

The partnership between ZutaCore and NeevCloud is set to revolutionise the AI landscape in India by pioneering a “Sustainable AI for the Masses” model. ZutaCore’s innovative HyperCool technology reduces water usage by up to 90% and energy consumption by 40% in data centers, making it an ideal solution for NeevCloud’s mission to drive sustainable AI adoption.

Narendra Sen, CEO of NeevCloud, expressed his ambition to construct an AI cloud infrastructure tailored for Indian clients, comprising 40,000 graphics processing units (GPUs) by 2026. This infrastructure aims to support Indian enterprises with training, inference, and other AI workloads. NeevCloud has already placed an order for 8,000 NVIDIA GPUs with HPE and plans to deploy another 12,000-15,000 GPUs by 2025.

To fund the acquisition of the 40,000 GPUs, estimated to cost approximately $1.5 billion, NeevCloud has partnered with three large data center companies in India. The partnership involves a revenue-sharing model, where the data center partners will deploy the GPUs on behalf of NeevCloud, while NeevCloud will bring in the customers to access both the AI cloud capacity and the data center servers.

NeevCloud’s ambitious plans position the company as a strong competitor in the AI-infrastructure-as-a-service space, alongside other prominent players such as Yotta, E2E Network, Krutrim AI Cloud, and Tata Communications. With the integration of ZutaCore’s HyperCool technology, NeevCloud is set to lead the way in sustainable AI adoption in India.

The post ZutaCore & NeevCloud Partner to Drive Sustainable AI with HyperCool Tech appeared first on AIM.

Steve Wozniak Says he is not impressed by Apple Intelligence 

Steve Wozniak, the co-founder of Apple, is unimpressed with Apple Intelligence’s demo and, in a recent interview, asked users to try it themselves before forming an opinion.

When asked if he was impressed, Wozniak candidly replied, “No. I was impressed with the demos, I always am, but I still strongly believe you should try it and find out after it’s over, you know, what works, what doesn’t, what didn’t come through the way you expected it to.”

He further praised Apple’s WWDC event, saying, “I come to these events or I watch them. And when they’re all over, the WWDC conferences are my favorite. They’re not introducing hardware, they’re focusing on software, which brings changes to how we live and work.”

He added that he is very interested in Apple Intelligence, though. “Taking your bits of data and not the bits of data from around the world out there on the web or anything and trying to make better sense of what you’re doing with your product,” said Wozniak.

Moreover, he joked that, just like Apple Intelligence, he has his own intelligence, which he calls “Actual Intelligence.” He hopes that the new Siri is better than the previous one.

“If you ask something that any human being would understand but is slightly complicated, with one extra word, Siri sometimes has trouble with it. So I’m hoping to see that it finally improved because I like using Siri an awful lot,” he said.

On the other hand, Tesla chief Elon Musk has also been vocal about his concerns regarding Apple’s partnership with OpenAI. “If Apple integrates OpenAI at the OS level, then Apple devices will be banned at my companies. That is an unacceptable security violation,” he posted on X.


— Elon Musk (@elonmusk) June 10, 2024

Musk recently criticized OpenAI’s operational transparency and governance and filed a lawsuit against the company, alleging that OpenAI has strayed from its founding principles, which aimed to develop artificial general intelligence (AGI) for the benefit of humanity rather than for commercial gain.

Apple is all set to boost its Siri voice assistant and operating systems with OpenAI’s ChatGPT as it seeks to catch up in the AI race. It is part of a new personalised AI system – called “Apple Intelligence” – that aims to offer users a way to navigate Apple devices more easily. Updates to its iPhone and Mac operating systems will allow access to ChatGPT through a partnership with OpenAI.

The post Steve Wozniak Says he is not impressed by Apple Intelligence appeared first on AIM.

AMD and the future of the AI PC


The last 18 months have seen a whirlwind of industry interest around artificial intelligence (AI), including the introduction of a new class of AI-oriented systems referred to as "AI PCs." The speed of AI arrival and the newness of AI PCs naturally raise questions about how much—and when—artificial intelligence is likely to matter to your organization.

While the particulars vary by industry, major software vendors in a host of fields are either developing new AI-based products or integrating AI processing into existing software suites. These updates and capabilities are already rolling out to end users, but haphazard, untracked usage across an organization isn't the way to achieve optimal outcomes with AI.

AI deployments work best when undertaken thoughtfully, with clear goals and effective metrics for measuring whether those goals have been achieved. People often need time to experiment with and adjust to a new technology, whether that means a new data analytics platform or an internal chatbot.

What's an AI PC?

While the exact meaning of the term varies depending on the organization, AI PCs generally contain a central processing unit (CPU), graphics processing unit (GPU), and dedicated neural processing unit (NPU). These new AI capabilities sometimes carry their own branding; AMD, for example, refers to the AI processing proficiency of its CPU, GPU, and NPU under the brand "Ryzen AI."

CPUs and GPUs have existed for decades, but integrating a dedicated AI processor to handle emerging AI workloads is a recent innovation. AMD launched the first laptop processors with an on-die neural processor in 2023, and the first NPU-equipped desktop chips in 2024. While AI workloads can run on the CPU, GPU, or NPU, NPU-equipped systems can potentially execute these workloads far more efficiently, helping to reduce power consumption and freeing the CPU and GPU for other tasks.

Bringing hardware on-die to help reduce power and improve performance is part of how semiconductor manufacturers incorporate new features, especially when those features radically expand user access to a particular type of computing.

There are two particular examples of this trend that are relevant to the larger conversation around AI today. In the 1980s, many consumer PCs shipped with CPUs that were only designed to handle integer calculations in hardware. Floating-point calculations that required a decimal point were handled via software emulation or through a specialized co-processor known as a floating-point unit (FPU) that sat in its own motherboard socket. As manufacturing technology advanced, chip designers moved the FPU on-die, making it more readily available to both software developers and end users. The ability to handle floating-point math expanded the fields the PC could address, including 3D gaming and high-performance computing (HPC) workloads.

The consumer 3D graphics accelerators that emerged in the mid-to-late 1990s are another example of how integrating new capabilities and technologies can transform the PC. The first GPUs were discrete cards; motherboards with "onboard" graphics existed, but the performance of these solutions was quite low in comparison to a standalone card. Bringing graphics capabilities aboard the processor allowed semiconductor manufacturers to dramatically improve the GPU's performance and power consumption.

Many applications, including web browsers and operating systems, now use the GPU for rendering, while the widespread proliferation of video services across the internet was partly made possible by low-power video encoders baked into modern desktop, laptop, and mobile chips. In both cases, bringing these specialized processors onboard the CPU increased consumer access to the underlying technologies, allowed for greater innovation across the PC industry, and reduced cost. Over time, the relatively staid ability to run floating-point workloads or to handle video decode in a dedicated, on-die function block has had a transformative impact on the long-term evolution of the PC. AI is likely to follow a similar trajectory.

"Transformative impact" is a big label to hang on any technology, especially one as nascent as AI, but the services and capabilities now rolling out across the industry imply the label isn't undeserved. Historically, if you wanted to use a computer to create something complex, detailed, or nuanced, you needed to be well-versed in an application or three. The more advanced your project, the more thorough your own knowledge needed to be. This was true in 1984 and it's still mostly true in 2024. But AI has the potential to upend this axiom by closing the knowledge gap between what a user wants to accomplish with a PC and what they already know how to achieve.

There are now a number of competing commercial services that can turn text into images, while text-to-video concepts have been demoed. Different companies are working on digital personal assistants, with implementation concepts ranging from integrated website chatbots to holistic tools that could monitor a smart home or interact with an end-user's PC. What unites these disparate products and efforts is the idea that AI's greater contextual awareness and the ability to translate written or spoken text into a coherent directive will lead to better computing experiences—and, by extension, more useful computers.

The exact impact AI will have on your business depends on the business. In some contexts, that might mean an AI providing document summaries, transcripts, and translation services. In others, it might mean using AI for unstructured data analysis or deploying it within a 3D modeling application to allow the end-user to create and design in plain language.

Why Invest Now?

Corporate PC fleets are typically refreshed on a 3-4 year timeline, which means plenty of newly minted systems today could be running AI workloads in a year or two. Companies that start evaluating how to best integrate AI into their existing systems and processes now will be better positioned to improve overall workforce productivity, outpace their competitors, and take advantage of the benefits AI offers as the technology continues to mature. This is in addition to the standard benefits from newer system deployments, including TCO and overall energy efficiency. If you are interested in comparing the latest Ryzen processor-based systems, the AMD Processor Efficiency Calculator offers power consumption estimates on a range of Ryzen and Ryzen PRO processor-based laptop computers.

One of the best ways to ensure that your PC fleet is ready to handle these workloads is to invest in PCs built with AMD Ryzen PRO processors, featuring Ryzen AI. AMD led the x86 processor market with a 10 TOPS (trillions of operations per second) NPU in 2023, and select models of the recently launched Ryzen Mobile 8040 Series and desktop Ryzen 8000G Series processors offer a 16 TOPS NPU. AMD has worked with over a hundred software vendors to provide broad ecosystem compatibility and is deeply committed to supporting AI and its emerging use cases. General software support for AI is advancing as developers integrate AI into already-established products and new, AI-based applications come to market.

AI is real. Underneath the hype and still-uncertain effects is a technology that's already driving productivity gains and customer experience improvements. The question isn't if AI will impact computing and business at large, but when—and which companies will be best positioned to take advantage of it.

Air India is Using GPT-4 to Power its Virtual Assistant

Air India is using OpenAI’s most advanced Large Language Model (LLM) GPT-4 to power its virtual assistant, revealed Viju Chacko, VP – head of digital architecture at Air India.

“We were using GPT-3.5, but now we transitioned to GPT-4,” Chacko revealed while speaking at GitHub Galaxy 2024, held in Bengaluru, India.

Earlier this year, Air India CEO Campbell Wilson stressed the carrier’s desire to leverage GPT-4.

Last year, Air India announced that it was the world’s inaugural airline to implement a Generative AI virtual agent, dubbed ‘Maharaja,’ leveraging Microsoft’s Azure OpenAI service.

The AI chatbot manages customer queries across 1,300 areas, including flight status, baggage allowances, flight changes and refunds, airport lounge access, packing restrictions, check-in procedures, and frequent flyer awards.

In January, Air India also announced a WhatsApp version of the chatbot. Travellers can access the chatbot via a dedicated number.

Air India aims to be one of the most technologically advanced carriers in the world. Last year, it successfully migrated to a cloud-only IT infrastructure, having closed its historic data centres in Mumbai and New Delhi.

This makes it one of the first major global airlines to have moved all computational workloads exclusively to the cloud.

The post Air India is Using GPT-4 to Power its Virtual Assistant appeared first on AIM.

From chaos to creation: How data labeling drives success in generative AI


Training data is the cornerstone of AI algorithms – the quality of outputs is contingent on the data the AI model was trained on. The data determines the success of AI models, underscoring the importance of data labeling.

Data labeling plays a crucial role in generative AI by providing context and meaning to the data used for training machine learning algorithms, enabling them to generate more meaningful outputs. For example, ChatGPT was trained on both labeled and unlabeled datasets. The labeled data included over 160,000 dialogues between human participants.

Let’s discover the power of data labeling in generative AI.

What is data labeling?

Data labeling involves identifying objects within raw digital data (images, text files, videos, etc.) and adding informative labels or tags to them to enable AI models to make accurate predictions and assessments. In other words, AI/ML models learn context from labeled data. For example, labels identify and tag a dog or cat in a photo, words uttered in an audio recording, or a tumor in a CT scan. Data labeling has a wide range of applications across industries; some of the most common use cases include computer vision, natural language processing (NLP), and speech recognition.
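As a concrete sketch of the idea, a labeled dataset is simply raw examples paired with human-assigned tags. The file names, texts, and labels below are invented for illustration:

```python
# Hypothetical labeled records: each pairs raw data (an image file
# or a sentence) with the label a human annotator assigned to it.
image_labels = [
    {"file": "photo_001.jpg", "label": "dog"},
    {"file": "photo_002.jpg", "label": "cat"},
]

text_labels = [
    {"text": "The scan shows a 2 cm mass.", "label": "tumor_present"},
    {"text": "No abnormalities detected.", "label": "tumor_absent"},
]

# A model trained on such data learns the mapping from raw input to label.
all_labels = sorted({rec["label"] for rec in image_labels + text_labels})
print(all_labels)  # prints ['cat', 'dog', 'tumor_absent', 'tumor_present']
```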

Data labeling for generative AI

The emergence of large language models and generative AI has significantly increased the demand for quality data. Most machine learning models use supervised learning, which involves using an algorithm to map inputs to desired outputs. For supervised learning to work, you need a dataset with predefined labels that the model can learn from to make predictions and correct decisions. The algorithm learns from the labels, allowing it to evaluate its accuracy and improve over time.
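As a toy illustration of that loop (the training sentences and labels here are made up), a tiny text classifier can "learn" word-to-label associations from labeled examples and then predict labels for new inputs:

```python
from collections import Counter, defaultdict

# Hypothetical labeled training data for a minimal supervised text classifier.
train = [
    ("the flight was delayed again", "complaint"),
    ("my bag was lost at the airport", "complaint"),
    ("great service and friendly crew", "praise"),
    ("the crew was wonderful", "praise"),
]

# "Training": count how often each word appears under each label.
word_label_counts = defaultdict(Counter)
for text, label in train:
    for word in text.split():
        word_label_counts[word][label] += 1

def predict(text):
    # Score each label by votes from words the model saw during training.
    scores = Counter()
    for word in text.split():
        scores.update(word_label_counts.get(word, Counter()))
    return scores.most_common(1)[0][0] if scores else None

print(predict("my flight was delayed"))  # prints "complaint"
```

Real supervised learning uses far richer models, but the shape is the same: the labels supply the ground truth the algorithm checks itself against.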

Generative AI and large language models (LLMs) are trained on vast amounts of data, providing them with a broad knowledge base stored in their pre-trained weights. However, they might still struggle with specific problems due to a lack of focused information. This is where data labeling comes in.

Fine-tuning LLMs has become an important step in training them to generate creative content or translate languages. The process involves using labeled datasets specifically designed for instruction tuning to further train publicly available LLMs like GPT-3. Let’s look at the significant roles of data labeling in generative AI.

Quality optimization: Data labeling drives up the quality and accuracy of the training data. Annotators meticulously categorize different scenarios within the data, ensuring AI models learn effectively from accurate information.

Semantic understanding: Generative AI models need to understand the context and meaning of the raw data they learn from to create outputs that are more accurate, coherent, and relevant. Data labeling provides context and meaning for the training data, allowing models to develop a deeper semantic understanding and generate outputs that make sense in the context.

Supervised learning: In supervised learning, labeled data are used to train models to figure out the correct outputs for specific inputs. Data labeling gives models instructions on the type of output expected, helping them deliver the desired outcomes.

Bias mitigation: Data labeling helps fight bias in generative AI models. Biases surface when limited, narrow data represents only a particular group. Data labeling allows for more control over the information the generative AI model is trained on. By using carefully curated and labeled data that represents a wide range of perspectives, situations, and people, we can guide the model toward a balanced understanding.
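The instruction-tuning datasets mentioned earlier are often stored as simple prompt/response records, one JSON object per line (JSONL). A hypothetical sketch, with invented field names and examples (conventions vary by project):

```python
import json

# Hypothetical instruction-tuning records: each pairs an instruction
# (plus optional input) with the labeled target response.
records = [
    {"instruction": "Translate to French.",
     "input": "Good morning",
     "output": "Bonjour"},
    {"instruction": "Summarize in one sentence.",
     "input": "A long article about data labeling...",
     "output": "Data labeling gives training data context."},
]

# Serialize to JSONL, a common on-disk format for such datasets.
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl.splitlines()[0])
```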

Types of data annotation for generative AI

Various data annotation methods are used for generative AI. Each technique involves labeling data with specific attributes or features, enabling the models to learn underlying patterns and relationships within the data and create new content.

Image annotation: It is the process of adding descriptive tags or labels to objects or people in an image.

Entity recognition: It involves identifying and labeling important keywords or phrases within a text, such as names, locations, or organizations (e.g., Albert Einstein, London, Google).

Sentiment analysis: This method focuses on understanding emotions or sentiments in a piece of text data and assigning labels, such as positive, negative, or neutral (fantastic, awful, or indifferent).

Metadata annotation: Extra information is added to raw data for context, assisting generative AI models in understanding the data in its broader context for more accurate analysis and interpretation. This includes details like location data, author information, timestamps, image source, and other relevant details that help the model to better understand the context of the data.

Conversation categorization: It focuses on classifying text data into different categories based on its topic or purpose, such as general inquiries, sales discussions, or customer complaints. This type of labeling helps AI models interpret the overall goal of the conversation and respond appropriately.
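The annotation types above can be pictured as data records. Everything below (file names, spans, categories) is invented for illustration; real annotation schemas differ by tool and project:

```python
# Hypothetical examples of the five annotation types described above.
annotations = {
    "image": {"file": "street.jpg",
              "boxes": [{"label": "car", "xywh": [40, 60, 120, 80]}]},
    "entities": {"text": "Albert Einstein worked in Bern.",
                 "spans": [{"text": "Albert Einstein", "type": "PERSON"},
                           {"text": "Bern", "type": "LOCATION"}]},
    "sentiment": {"text": "The support team was fantastic!",
                  "label": "positive"},
    "metadata": {"file": "report.pdf", "author": "J. Doe",
                 "timestamp": "2024-06-01T12:00:00Z"},
    "conversation": {"text": "I want a refund for my order.",
                     "category": "customer_complaint"},
}

print(sorted(annotations))  # prints the five annotation types
```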

Final words

Data labeling empowers generative AI models to achieve superior performance by enabling them to generate more accurate and meaningful outputs suited to specific goals. Major AI companies, including OpenAI and Meta, have reportedly hired hundreds or thousands of human labelers to handle the massive amounts of data needed for fine-tuning ChatGPT and Llama 2, respectively. This underscores the importance of data labeling in advancing generative AI.

Top 10 Scarily Realistic Videos Generated by Kling, the Chinese Alternative to Sora

As an answer to OpenAI’s Sora, Chinese technology company Kuaishou introduced Kling, a new text-to-video AI model capable of generating high-quality videos.

The model can create large-scale, realistic motion that simulates physical-world characteristics, and it can produce two-minute videos in 1080p resolution at 30 frames per second, ensuring clear and visually appealing results.

Several AI enthusiasts shared their creations from Kling on X. The model generates videos that seem to be accurately simulating real-world physical properties by using advanced 3D face and body reconstruction backed by the company’s proprietary technology, allowing users to create videos in various aspect ratios.

Here are the Top 10 mind-blowing videos produced by Kling AI.

‘Bill Smith’ Eating Spaghetti

The AI-generated video of Will Smith eating spaghetti had captivated and unsettled viewers with its bizarre and surreal imagery, becoming a viral meme. Smith’s humorous recreation of the video added to its popularity, showcasing his engagement with digital culture.

However, the latest iteration, where ‘Bill Smith’ consumes spaghetti, highlights the unsettling potential of AI in creating uncanny content.

The Mad Max Beer Commercial

The Mad Max beer commercial has become a viral sensation due to its eerie and surreal depiction of a dystopian world where characters consume beer in bizarre scenarios. The commercial has been described as both fascinating and unsettling, highlighting the advanced capabilities of AI in media production.

This unique blend of futuristic aesthetics and unsettling imagery has sparked discussions about the potential and ethical implications of AI in advertising.

‘007 Dog Wars’

The ‘007 Dog Wars’ video is an innovative blend of James Bond themes with a canine twist, featuring dogs in action-packed, espionage-inspired scenarios. This video is praised for its creativity and high-quality visuals, demonstrating the advanced capabilities of Kling AI’s video production tools.

The unique and entertaining concept has garnered positive reactions, showcasing the potential for AI in creating engaging and imaginative content.

Chef Chopping Onions

The recent AI video of a chef chopping onions is a remarkable display of AI capabilities in creating realistic and engaging content. The animation captures the meticulous details of the chef’s movements and the precise handling of the knife.

Overall, it showcases its potential in generating high-quality, lifelike video content, making it a standout in the realm of digital animation.

A Long-haired Girl is Singing to Her Phone

This video of a long-haired girl singing to her phone demonstrates the model’s ability to render a human figure with coherent movement and expression.

Sea Creatures Under the Sea

This AI-generated video shows an underwater world home to a vast array of fascinating sea creatures in vibrant colours, each adapted to its environment. The model’s ability to create such realistic footage from text descriptions underscores its potential to revolutionise the way videos are created.

Hulk and Thor Dancing in Front of Iron Man

Kling AI’s video featuring Hulk, Thor, and Iron Man dancing exemplifies the seamless integration of AI technology in entertainment. Through advanced animation algorithms, the characters exhibit life-like movements and expressions, enhancing the immersive experience.

The Rabbit who Reads the Newspaper

This AI-generated video of a rabbit reading the newspaper wearing glasses showcases AI’s remarkable capabilities in character animation. Utilising advanced algorithms, the AI breathes life into the rabbit, rendering its movements and expressions with precision and realism.

The attention to detail in the rabbit’s mannerisms highlights the sophisticated programming behind the scenes. This application of AI demonstrates how technology can transform simple actions into engaging, professional-quality content.

A Little Man with Blocks Visiting an Art Gallery

The AI video of a Lego man visiting a gallery brilliantly showcases AI’s potential in animation and storytelling. The character’s movements within the gallery are rendered in detail, and the AI ensures each gesture and reaction is natural, enhancing the viewer’s engagement and the character’s believability.

Closeup of Ice Cubes and Green Lemon Slices Moving in Water

Kling AI’s video featuring a closeup of ice cubes and green lemon slices moving in water exemplifies the cutting-edge use of AI in visual effects. Advanced AI algorithms meticulously simulate the physical properties of light refraction, fluid dynamics, and natural movement, creating a highly realistic and captivating scene.

The post Top 10 Scarily Realistic Videos Generated by Kling, the Chinese Alternative to Sora appeared first on AIM.

Everything to know about Apple’s AI features coming to iPhones, Macs, and iPads


During WWDC 2024, Apple poured a big vat of artificial intelligence onto expectant viewers, leaving us drenched in new AI features under the banner of Apple Intelligence. But how do all these features work?


GitHub is Madly in Love with India’s Burgeoning Developer Ecosystem


GitHub believes India will overtake the US as the largest developer community on the platform by 2027. To foster this ecosystem and assist developers across India and beyond, it has partnered with Indian IT firm Infosys and opened the first GitHub Center of Excellence at Infosys, Bengaluru.

This partnership represents a generational opportunity for Global Systems Integrators (GSIs) to spearhead advancements in the AI and software sectors.


“A new day has begun for the world’s GSIs. The Age of Copilot is here,” said GitHub chief Thomas Dohmke, who is in Bengaluru to attend GitHub Constellation 2024 scheduled for June 12. He added that by equipping their developers with GitHub Copilot and extending its capabilities to customers, GSIs can dramatically accelerate software production worldwide.

Open Healthcare Network in India is a profoundly inspiring story of how we can accelerate human progress by enabling the world's soon-to-be largest developer community with the possibilities of AI. India's developers, building with their copilot companion, will help save lives –…

— Thomas Dohmke (@ashtom) June 11, 2024

GitHub Constellation celebrates the best of the Indian developer community and provides a platform to connect on topics such as AI, collaboration, community, and security.

“GitHub is a very integral partner in what we are doing at Infosys. It brings tremendous value, letting developers focus on code, creating new features and functionalities, and innovating at the speed of thought,” said Naresh Choudhary, vice president and head of reuse and tools at Infosys.

“The GitHub advanced security features that we have been using, whether it is code scanning, secret scanning, or Dependabot, have all played a tremendous role in how we make code and deliver it to our customers as secure by design, built-in from Day 1 and not as an afterthought,” he explained.

“We see generative AI play a critical role in all parts of the software development lifecycle; GitHub Copilot plays a crucial role in that. We have been on this Copilot journey for some time. We were early adopters, with 7,000 employees leveraging GitHub Copilot in the work that we do,” said DR Balakrishnan, EVP, service offering head, ECS, AI, and automation, Infosys.

GitHub x India

GitHub Copilot now allows users to code in Hindi as well. At Microsoft Build 2024, CEO Satya Nadella announced that developers can now program in their native languages, including Hindi.

“Think about it — every person can now start programming, whether it’s in Hindi or Brazilian Portuguese, and bring back the joy of coding in their native language,” said Nadella, emphasising that this would be available in the Copilot Workspace.

On his recent visit to the Microsoft India office, Dohmke said, “Together, GitHub and Microsoft will generate a groundswell of developers in India, building and deploying in natural language. India will rise in the age of AI—and we’re here to enable it.”

India currently has 13.2 million developers using GitHub, compared to approximately 20 million in the US. India also ranks second globally in the number of GenAI projects hosted on GitHub, following the US. The country hosts just under 30 million repositories.

In India, Axis Bank, HCLTech, and LTIMindtree are some of the other customers of GitHub Copilot, apart from Infosys. Axis Bank was the first in the country to adopt Copilot for Microsoft 365 at enterprise scale with 300 users and has seen over 30% productivity gains in daily work.

Indian IT giant HCLTech developed a Copilot for Microsoft 365 plugin for Microsoft Teams to help software developers and managers streamline bug resolution. Meanwhile, LTIMindtree, an IT and consulting services company, created a Copilot for Microsoft 365 plugin for Teams to optimise staff management.

Dohmke recently posted a picture on X from his first visit to Bengaluru in 2008. “I love this country,” he wrote, adding that India is at the nexus of a monumental economic opportunity. “It is set to become the world’s largest developer community at the exact point in time when the age of AI (artificial intelligence) is taking off,” he said.

Although this is my first visit as GitHub CEO in India, this is not my first time here. I love this country. This is me in 2008, in Bengaluru — I haven’t aged at all 😝
India is at the nexus of monumental economic opportunity, as it is set to become the world’s largest developer…

— Thomas Dohmke (@ashtom) June 5, 2024

He further said that the next great AI startup is as likely to come from Mumbai or Bengaluru as from San Francisco or Seattle. Ironically, last year GitHub laid off 85% of its Indian workforce: of 216 employees, 183 were asked to leave, including its entire engineering team in India.

Meanwhile, Indian developer Mufeed VH recently built something similar to GitHub Copilot—an open-source AI software engineer named Devika. It can understand human instructions, break them down into tasks, conduct research, and autonomously write code to achieve set objectives.

Nevertheless, GitHub remains bullish on India and on empowering the country’s developers. As Dohmke sums it up, “The odds are ever in India’s favour to rise and win the age of AI.”

The post GitHub is Madly in Love with India’s Burgeoning Developer Ecosystem appeared first on AIM.

DSC Weekly 11 June 2024

Announcements

  • Cyberattacks are an unfortunate problem for digital business, targeting everything from small companies to the largest enterprises. As digital infrastructure expands and more sensitive information is stored online, security risk management must go beyond prevention to ensure organizations have full visibility of their digital environments and can address incidents in real time. Join our upcoming summit, A Holistic Approach to Endpoint Detection and Response, to get practical EDR strategies that bolster your security posture. You’ll learn how the principle of least privilege, IoT security, and telemetry help protect your endpoints, and receive advice on using advanced forensics and AI-powered investigations to speed response times.
  • The current threat landscape is vast, forcing companies to detect and respond to increasingly sophisticated attacks that cause significant damage. With the proliferation of artificial intelligence (AI) and machine learning, experts are concerned threat actors will weaponize evolving technology for devastating attacks on businesses across the world. The supply chain remains a vulnerable target due to the volume of information being transferred and the third parties involved. Cyber warfare is projected to threaten even more organizations, with nation-state attacks making up a growing portion of threats. With all these risks circulating in cyberspace, how can companies implement a solid detection and response program that keeps up? Register for the free Enabling Effective Threat Detection and Response summit to learn expert detection and response strategies to safeguard your enterprise from cyber threats.

Top Stories

  • Internet: Lessons for AI safety and alignment from pharmaceutical regulations
    June 10, 2024
    by David Stephen
    The pharmaceutical industry is among the most regulated industries in the United States for the safety and efficacy of therapies. Yet, there are other approaches at the source, aside from regulatory efforts. Narcan (naloxone), a medication that reverses opioid overdose, is an example of using drugs to counter the effects of drugs.
  • Generating the AI Dividend: Transforming Society’s Economic Value Curve
    June 9, 2024
    by Bill Schmarzo
    In my previous blog, ‘AI Dividend, Universal Basic Income, and Economic Multiplier Effect,’ I explored how Artificial Intelligence (AI) can create an AI Dividend that yields staggering economic benefits by intelligently automating routine tasks, optimizing decision-making processes and fostering innovation across all sectors of society.
  • Orange County Department of Education has the AI juice
    June 10, 2024
    by Dan Wilson
    Explore how AI is revolutionizing education in Episode 10 of the AI Think Tank Podcast. Join host Dan Wilson with guests Wes Kreisel and Kunal Dalal from Orange County Department of Education as they discuss AI’s transformative impact, overcoming challenges, and fostering student leadership.

In-Depth

  • Key Trends in Intelligent Automation: From AI-Augmented to Cognitive
    June 11, 2024
    by Alaa Mahjoub, M.Sc. Eng.
    Intelligent automation is advancing rapidly by integrating AI augmentation, autonomy, autonomic, and cognitive capabilities into automation systems. Each capability represents a different level of sophistication in how Artificial Intelligence (AI) interacts with human activity and the surrounding environment. Intelligent automation evolved from basic rule-based systems to incorporate sophisticated machine-learning algorithms.
  • From chaos to creation: How data labeling drives success in generative AI
    June 11, 2024
    by Matthew McMullen
    Training data is the cornerstone of AI algorithms – the quality of outputs is contingent on the data the AI model was trained on. The data determines the success of AI models, underscoring the importance of data labeling. Data labeling plays a crucial role in generative AI by providing context and meaning to the data used for training machine learning algorithms, enabling them to generate more meaningful outputs.
  • Blockchain solutions for intelligent transportation system
    June 6, 2024
    by Manoj Kumar
    The transportation system connects the world and is crucial to the movement of goods, products, and logistics, as well as carrying people from one place to another. Aspects such as paperwork, fleet management, traffic management, supply chain, and database management have all made a big impact on the transportation system.
  • The critical role of data cleaning
    June 6, 2024
    by Lukas Racickas
    As a product manager, I have closely worked with data engineering teams and witnessed the fantastic ways to transform raw web data into insights, products, data models, and more. Data cleaning consistently stands out as a vital component. In this article, we’ll delve into the role that data cleaning, also referred to as data cleansing or scrubbing, plays within the data processing chain and its contribution to the success of utilizing the potential of web data to the fullest.
  • From data hoard to action hero: Mastering data activation
    June 5, 2024
    by Erika Balla
    In today’s data-driven world, organizations collect information at an unprecedented rate. Customer behavior, website interactions, social media engagement – the list goes on. But here’s the catch: data itself isn’t inherently valuable. It’s what you do with it that unlocks its true potential. This is where data activation comes in.
  • DSC Weekly 4 June 2024
    June 4, 2024
    by Scott Thompson
    Read more of the top articles from the Data Science Central community.