Apple’s ‘Big AI Plans’ Coming Soon

Over the last decade, Apple has spent around $700 billion on stock buybacks, and with this money, the Cupertino tech giant could have bought Tesla, Rivian, Lucid, OpenAI and Anthropic and become a dominant force in both EVs and AI.

Most recently, Apple unveiled a $110 billion share buyback programme after its Q2 profit and revenue dropped. As a result, its stock surged 12% in after-hours trading as CEO Tim Cook predicted sales growth with upcoming AI-driven features.

The buyback initiative rewards its investors and aligns Apple with other tech giants amid concerns over rising generative AI investment.

It also enables Apple to manage its capital structure effectively, returning capital to shareholders without ongoing dividend commitments, especially amid economic uncertainty.

Additionally, this buyback signals to the market that Apple’s management perceives its stock as undervalued, potentially boosting investor confidence and attracting more buyers.

Apple’s EV ambitions come to a halt

After a decade of effort, Apple reportedly ended its ambitious Project Titan, halting the development of its electric car in favour of shifting focus to generative AI initiatives.

Lately, Apple has been investing heavily in expanding its AI capabilities. This focus is evident in its development of proprietary AI models like the OpenELM model, designed to run on Apple devices.

The model is particularly notable for optimising deep neural network layers to enhance efficiency. Apple’s use of a neural net with just 1.3 billion parameters stands in contrast to models like GPT-4 and Google’s Gemini.

Apple GenAI Dreams

At the latest earnings call, Cook said, "The recent quarter was thrilling, as we launched Apple Vision Pro to show the world the potential that spatial computing unlocks."

“This is just the beginning, we’re also making exciting product announcements and sharing more about the vision for generative AI in coming weeks and months,” he added.

Cook is all set to unveil significant upgrades at the iPad Let Loose event on May 7 and tease new AI capabilities.

Apple has been making waves lately with its focus on integrating generative AI capabilities into its devices. In the past few months, it has introduced its latest creations, the MM1 models—a new series of multimodal AI models with an impressive 30 billion parameters, alongside ReALM, which integrates text and images to enhance its understanding and responsiveness to prompts.

Apple also introduced ‘Ferret-UI,’ a multimodal AI model designed to execute precise tasks concerning user interface screens while interpreting and acting upon open-ended language instructions. This advancement suggests a future where verbal commands may seamlessly supplant traditional finger gestures for iPhone navigation.

In addition, another research paper unveils Keyframer, a tool purportedly capable of generating animations from static images, alongside an AI model tailored for image editing.

Siri will be powered by Gen AI

At the same time, Apple has been in talks with various tech giants, including OpenAI and Google, to integrate generative AI into its upcoming iPhone series. Recently, Apple has rekindled discussions with OpenAI to integrate AI functionalities into iOS 18 (iPhone 16).

Additionally, in 2023, reports indicated that Apple invested significantly in research and development to improve Siri’s conversational skills. Integrating GPT-like technology into Apple’s infrastructure would substantially improve Siri.

The primary capabilities of a GenAI-driven Siri will probably stem from Apple’s in-house models operating on the device, while supplementary functionalities such as generating images and crafting long-form text may be sourced from third-party entities like OpenAI, Baidu, or Google.

Furthermore, Apple is experimenting with an internal chatbot. The upgraded Siri is expected to offer natural conversation abilities and enhanced user personalisation. According to Mark Gurman, a significant change will be removing ‘Hey’ from ‘Hey Siri.’

What’s next?
At last year’s WWDC, Tim Cook and other Apple executives did not mention the term ‘artificial intelligence’ even once. But at the recent earnings call, everything changed: Cook said he is optimistic about Apple’s “opportunities in generative AI” and is “making significant investments,” hinting at ‘big AI plans’ coming at WWDC next month.

The post Apple’s ‘Big AI Plans’ Coming Soon appeared first on Analytics India Magazine.

Sam Altman Says AI Systems Could ‘Testify’ Against Individuals, Akin to Being Subpoenaed in Court

In a future where each individual has access to their own personalised AI companion, OpenAI CEO Sam Altman believes we would need to redefine “privileged information.”

During a recent interview with MIT president Sally Kornbluth, Altman spoke on the privacy-versus-utility debate and where the tradeoff might be set in the near future. In particular, he proposed the hypothetical of a personalised AI that has access to nearly all of an individual’s personal data and asked whether it could potentially be used as evidence against them in a court of law.

“That would be super helpful. But you can also imagine the privacy concerns that would present. How are we going to navigate the privacy versus utility versus safety tradeoffs or security tradeoffs that come with that? Do we need a new definition of ‘privileged information’ so that your AI companion never has to testify against you or can’t be subpoenaed by a court?” he asked.

Of course, Altman points out the advantages of letting an AI train on one’s entire life. But exactly where people will set limits on the tradeoff between privacy and utility, he says, will be a new thing for society to navigate.

Further, he implied that the current privacy debates are only the tip of the iceberg, and that conversations on data privacy could go much further when it comes to AI.

“There are all these things that we’ve had to negotiate with the internet, things about how we think about privacy, how we think about online ads, that when you intersect them with AI, become much higher stakes and much bigger trade-offs,” he said.

Interestingly, he also pointed out that these issues could be resolved sooner if researchers find a way to separate reasoning engines from the data needed to train them.

“The only way we currently know how to do that (make a reasoning engine) is by training them on tons of data. I think we’ll look back and say that was a weird waste of resources because though it is true that GPT-4 can act as a database, it’s slow and expensive and doesn’t work very well,” he said.

However, if developers manage to separate the two so that the engine itself doesn’t store the data it is trained on, the debate surrounding the issue of privacy could be eased.


Pure Storage and Red Hat Accelerate Modern Virtualisation Adoption

Pure Storage and Red Hat today announced an optimisation for Portworx by Pure Storage on Red Hat OpenShift to enable streamlined integration and provide enterprises with a more seamless path to modern virtualisation.

The collaboration aims to deliver a single platform for deploying, scaling, and managing modern applications and a single control plane for virtual machines (VMs) and containers.

The optimisation comes as enterprises increasingly move applications to containers to speed up and scale deployment while remaining significantly invested in large traditional application footprints that run in VMs. Supporting multiple platforms based on both VMs and containers can be cumbersome and expensive, often exacerbated by the need to re-architect VM-based applications for compatibility with modern frameworks.

Portworx by Pure Storage and Red Hat OpenShift, through Red Hat OpenShift Virtualisation, supports both containers and VMs, enabling customers to standardise end-to-end application modernisation at scale. With this latest optimisation and co-development, enterprises can run traditional virtualised applications side-by-side with modern containerised applications, streamlining operations, reducing costs, and bringing the entire application development process together.

The integration brings enterprise-grade data management capabilities to Red Hat OpenShift customers, providing developers with a single, self-service portal to build, test, and deploy applications, with integrated support for storage, data, and the entire application lifecycle management. Benefits include faster time to market with enhanced operational efficiency, simplified and more consistent development and management, and flexibility to deploy VMs and containers anywhere.

“Enabling integration for Portworx on Red Hat OpenShift represents a pivotal advancement in modern IT infrastructure representing the modern storage and compute building blocks, respectively,” said Murli Thirumale, GM, Portworx by Pure Storage. “Our collaboration with Red Hat not only accelerates application development and deployment but helps drive enterprise reliability and operational flexibility across complex, hybrid cloud environments.”

“Portworx offers enterprise-class storage capabilities such as high availability, performance, and backup and disaster recovery for both VMs and containers running in Red Hat OpenShift,” said Mike Barrett, vice president and general manager of Hybrid Platforms, Red Hat. “Together, we are providing a powerful solution for enterprises seeking to modernise their application development without the complexities of managing disparate development platforms.”

According to a recent survey, more than four out of five (81%) data management stakeholders are planning to modernise or migrate existing VM workloads to cloud-native platforms, with 79% citing operational simplicity as a key driver for these plans. The integration of Portworx and Red Hat OpenShift arms enterprises with the tools they need to seamlessly integrate containers and VMs on a unified infrastructure while driving efficiency, agility, and significant cost savings.


India urges political parties to avoid using deepfakes in election campaigns

By Manish Singh

India’s Election Commission has issued an advisory to all political parties, urging them to refrain from using deepfakes and other forms of misinformation in their social media posts during the country’s ongoing general elections. The move comes after the constitutional body faced criticism for not doing enough to combat such campaigns in the world’s most populous nation.

The advisory, released on Monday, requires political parties to remove any deepfake audio or video within three hours of becoming aware of its existence. Parties are also advised to identify and warn the individuals responsible for creating the manipulated content. The Election Commission’s action follows a Delhi High Court order asking the body to resolve the matter after the issue was raised in a petition.

India, home to more than 1.4 billion people, began its general elections on April 19, with the voting process set to conclude on June 1. The election has already been marred by controversies surrounding the use of deepfakes and misinformation.

Prime Minister Narendra Modi complained late last month about the use of fake voices to purportedly show leaders making statements they had “never even thought of,” alleging that this was part of a conspiracy designed to sow tension in society.

The Indian police have arrested at least six people from the social media teams of the Indian National Congress, the nation’s top opposition party, for circulating a fake video showing Home Minister Amit Shah making statements he claims he never made.

India has been grappling with the use and spread of deepfakes for several months now. Ashwini Vaishnaw, India’s IT Minister, met large social media companies, including Meta and Google, in November, and “reached a consensus” that regulation was needed to better combat the spread of deepfake videos as well as apps that facilitate their creation.

The IT ministry in January warned tech companies of severe penalties, including bans, if they failed to take active measures against deepfake videos. The nation is yet to codify its draft regulation on deepfakes into law.

The Election Commission said on Monday it has been “repeatedly directing” the political parties and their leaders to “maintain decorum and utmost restraint in public campaigning.”

Tata Electronics Begins Exporting Semiconductor Chips from Bengaluru R&D Centre

Tata Electronics Ltd has commenced exporting limited quantities of semiconductor chips packaged at the pilot line of its Bengaluru-based research and development centre, according to people familiar with the matter. The packaged chips are being shipped to some of Tata Electronics’ partners in Japan, the US, and Europe.

Groundwork for New Chip Packaging Unit and Foundry

This development comes as the Tata group company prepares its new chip packaging unit at Morigaon in Assam and a $10 billion chip foundry at Dholera in Gujarat. The company is also in the near-final stages of taping out semiconductor chips in various nodes, including 28, 40, 55, and 65 nm.

Expanding Customer Base and Strategic Partnerships

Tata Electronics is expanding its customer base and has multiple partners for the packaged chips, which are still in the pilot stage. As reported in April, the company has also signed a strategic deal with Tesla to supply semiconductor chips for its global operations.

Importance of Demonstrating Capabilities

Neil Shah, vice president at Counterpoint Research, emphasized the importance of Tata Group demonstrating its chip designing and manufacturing capabilities to potential customers and partners before the fabs are commissioned over the next 30-36 months. This could be achieved through leveraging Tata Elxsi’s partnerships with Renesas and Lattice Semi and its in-house Sankhya Labs branch.

Tata Group’s Chip Foundry and OSAT Unit

Tata Group’s chip foundry in Dholera, built in alliance with Taiwan’s Powerchip Semiconductor Manufacturing Corporation (PSMC), has a planned capacity of up to 50,000 monthly wafer starts. The facility is expected to produce chips in leading nodes such as 28 nm and 40 nm as well as some legacy or mature nodes. Apart from the foundry, Tata Group is also building an outsourced assembly and testing (OSAT) unit in Assam.


Happiest Minds Targets $1 Billion Revenue by 2031 with Generative AI

Ashok Soota, the executive chairman of Happiest Minds Technologies, said during the company’s latest earnings call that the establishment of the generative AI business unit, the creation of six new industry groups, and the acquisitions of PureSoftware Technologies and Macmillan Learning are key steps toward achieving the company’s goal of $1 billion in revenue by FY31.

The company reported 1.4% quarter-over-quarter and 9.5% year-over-year growth in constant-currency revenue for the quarter ending March 31, 2024.

“Our efforts around generative AI are really paying off, and we have a very good team in place, whether it’s at the leadership or technology level. Sales for this business unit have come on board and are working closely with the rest of the sales team to get new customer logos,” said Venkatraman Narayanan, managing director and CFO at Happiest Minds Technologies.

He further added that the unit currently has 14 customers and numerous ongoing discussions. The company’s overall client base grew to 250 this quarter.

Sridhar Mantha, president and CEO of Generative AI Business Services at Happiest Minds, noted that the company’s technology adoption typically starts with small use cases in the first year and progresses to multi-million-dollar engagements by the second or third year. Although the main objective is to increase revenue through significant use cases, initial projects typically concentrate on cost optimisation.

“I think it’s important to say that we think it’s a transformational opportunity. And I believe we’re taking more advantage of it than anybody else in the business,” added Soota.

The company developed AI-enhanced solutions for the travel and transportation sector to improve ticket management and for the CPG sector to assist legal teams with supplier contracts, both enhancing efficiency and communication.

“We just want to add one more thing: what we are discussing with our customers, we are also implementing in our own organisation,” said Rajiv Shah, president and member of the Executive Board.

In many customer engagements, the team has begun to leverage tools like code profilers to enhance productivity, and it is training all employees on AI copilots and similar technologies. These developments are being integrated into Happiest Minds’ hiring strategies and into the planning of projects and engagements with customers. The company is aiming for 75% growth in headcount.

Read more: Data Science Hiring Process at Happiest Minds


How SuperKalam Uses OpenAI GPTs to Fuel UPSC Aspirants 

As the UPSC Civil Services Exam (CSE) date of June 16 draws closer, over 11 lakh aspirants are diligently preparing to tackle one of the world’s most challenging examinations. SuperKalam, an AI platform, is empowering students with personalised learning experiences, paving the way for their success in cracking the UPSC CSE.

Vimal Singh Rathore, along with Aseem Gupta, both of whom previously founded qoohoo, started SuperKalam in July last year.

Rathore cleared the UPSC Central Armed Police Forces (CAPF) exam but went on to work at Unacademy and later founded his own company, Coursavy.

The duo recognised the need for a more adaptable and student-centric approach to UPSC preparation. With his extensive background in education and entrepreneurship, Rathore set out to create a platform that could bridge the gap between traditional coaching methods and the unique requirements of each aspirant.

In an exclusive interview with AIM, Rathore shared his insights on the platform’s genesis and its mission to revolutionise UPSC preparation.

“The inspiration is pretty simple,” he explained. “There are three examinations in India which kind of change the trajectory of any student or any family’s future or life, which are UPSC, that is for the civil services, NEET to become a doctor, and JEE to become an engineer. These examinations cumulatively are given by 5 million aspirants every year.”

Rathore further emphasised the need for a more personalised approach to learning for these exams. He stated, “The entire responsibility of understanding, asking questions, whether they are grasping or not, is completely on the student. The platform somehow, because of this kind of process, was not accountable. They were unable to do a lot about how students can improve their learning outcomes.”

Leveraging OpenAI’s GPTs

SuperKalam leverages multiple models, including Llama, OpenAI’s GPT-4, and GPT-3.5, to deliver personalised learning experiences. For generating MCQs, SuperKalam relies on GPT-3.5, while reasoning-based problems are tackled using GPT-4 Turbo.
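SuperKalam has not published its routing logic, but the division of labour described above (a cheaper model for high-volume MCQ generation, a stronger model for reasoning) can be sketched as a simple task-to-model router. The task labels, model names, and fallback choice below are illustrative assumptions, not SuperKalam's actual code:

```python
# Illustrative sketch of routing task types to model tiers, as described above.
# Task labels and the default tier are assumptions for demonstration only.

def pick_model(task: str) -> str:
    """Return the model tier to use for a given task type."""
    routes = {
        "mcq_generation": "gpt-3.5-turbo",  # high-volume, cost-sensitive work
        "reasoning": "gpt-4-turbo",         # harder, multi-step problems
    }
    # Fall back to the cheaper tier for unrecognised tasks
    return routes.get(task, "gpt-3.5-turbo")

print(pick_model("mcq_generation"))  # gpt-3.5-turbo
print(pick_model("reasoning"))       # gpt-4-turbo
```

In production, such a router would sit in front of the actual API clients, reserving the stronger (and more expensive) model for tasks that need it.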

SuperKalam assists students in creating personalised timetables, sending daily targets, and tracking progress throughout their learning journey. It helps resolve doubts, aids in mastering concepts, and can evaluate a UPSC student’s handwritten answer in less than 60 seconds—a process that typically takes 2-4 weeks in the current education market.

The platform also conducts mock tests, identifies students’ strengths and weaknesses, and provides progress reports that foster self-awareness, unlocking their full potential.

Rathore revealed, “There are at least 14 Copilots that we are using to understand what you are asking SuperKalam.” The platform also uses fine-tuning and prompt engineering techniques to enhance the accuracy and relevance of the generated content.

SuperKalam has invested in NVIDIA GPUs and cloud infrastructure to ensure scalability and cost-efficiency, although they declined to mention how many.

“We did have GPUs from NVIDIA, but not like hardware as such inside our office. It’s our own cloud,” Rathore mentioned. As the platform’s user base grows, the team is exploring partnerships with local cloud service providers to optimise costs and performance.

Aspiring to become SuperKalam

Last year the founders were a part of the Y Combinator W23 batch and the company is backed by Gustaf Alstromer, a partner at Y Combinator. Rathore brings his experience as a founding team member and growth leader at Unacademy, while Gupta, the CTO, previously worked in the early engineering team at Razorpay.

Since its inception, SuperKalam has witnessed remarkable growth. The platform currently boasts a user base of over 46,000 students, with an impressive week-12 retention rate of 78%.

Rathore proudly shared, “We launched this latest version of the product on 15th of March. So 355 students who attempted any number of questions that day, the total number of questions that you are seeing, is different for every student.”

The impact of SuperKalam’s AI-driven approach is evident in the success stories of its users. Rathore highlighted the journey of Navya, a UPSC aspirant who struggled with consistency and self-doubt.

“Navya’s accuracy rate was 56% at 45 questions. And here, Navya is number one ranker, with 234 questions and 84% accuracy,” he beamed.

SuperKalam has set its sights beyond UPSC and aims to scale to cater to students attempting other competitive exams. “At the same time, our mission is to make quality education accessible and affordable,” Rathore concluded.


NVIDIA is Hiring AI Engineers in India

NVIDIA is hiring experienced AI engineers in India to join its partner companies. The selected candidates will join the NVIDIA Partner Network as employees and will be responsible for driving the adoption of NVIDIA technology and securing innovative design wins in Data Center, Edge, and Cloud Deployments.

Interested candidates can apply here.

The roles are based in Bangalore and New Delhi and focus on the position of Deep Learning Solutions Architect.

Requirements for the role include a degree in Engineering (BE/B.Tech/MS/MTech), preferably in CS/IT/Electrical/Electronics or equivalent. Candidates must have a comprehensive understanding and hands-on experience in NLP, Deep Learning, and Machine Learning, with optional experience in LLM, Generative AI, and RAG workflows.

Proficiency in coding using Python, PyTorch, and TensorFlow is essential, backed by a proven track record of 2-5 years, with a bonus for experience in C++ and CUDA. Candidates should also possess a strong grasp of the associated software architecture and frameworks, along with effective communication and presentation skills and the ability to multitask in a dynamic environment.

Standout candidates may have a background in customer-facing roles, development experience with NVIDIA software platforms and GPUs, and knowledge of MLOps technologies such as Docker/containers, Kubernetes, and data center deployments.


The Kendrick-Drake feud shows how technology is changing rap battles

Kendrick Lamar won the most tech-savvy rap battle to date

By Dominic-Madori Davis and Amanda Silberling

It seems we’re all in agreement: Kendrick Lamar defeated Drake in one of the most engrossing rap battles of the decade. To add insult to injury, Drake also threw himself into legal hot water when he deepfaked the late rapper Tupac.

The tension between Lamar and Drake goes back decades, but this latest flare-up began last fall when J. Cole dropped a song calling Drake, Lamar and himself the “Big Three” in rap. This March, Lamar finally responded, rejecting Cole’s assertion with a scathing verse that dissed him and Drake. The battle ignited, and soon, a legion of other hip-hop artists jumped in, releasing music and taking their sides against Drake.

The weeks-long dispute escalated into one of the most intense rap battles of the digital era. There were side battles (between Chris Brown and Quavo) and white flags (J. Cole apologized to Lamar and deleted his diss response to the rapper). Meanwhile, social media users created campaigns and giveaways against Drake, and support for diss tracks against him appeared in everything from Japanese rap to Indian classical dance.

The feud has also sparked a conversation about technology’s increased role in rap beefs, in addition to how and when AI should be used in music.

A pivotal moment came on the track “Taylor Made,” where Drake attempted to diss Lamar using AI vocals from Snoop Dogg and Tupac, a rap icon who was killed decades ago. Drake did not get permission from Tupac’s estate to use the late rapper’s vocals and was threatened with a lawsuit unless he removed the track. Even though Drake took it down, his decision to use AI vocals prompted discussion among music lovers and techies alike.

(Lamar and Drake could not be reached for comment by the time of publication.)

Rap battles have turned chronically online

An artist like Tupac, who died in 1996, couldn’t have imagined that artificial intelligence could emulate his voice so convincingly that one of the most popular rappers of the moment would insert it into a song. He also couldn’t have understood how the nature of the social internet would shape the future of music, where “every stream is a vote.”

In the early aughts, rappers had to funnel their diss tracks through radio, releasing physical albums and mixtapes while giving interviews throughout the years of a feud. Responding to a diss could take days or weeks, whereas today it can take mere seconds.

Lamar released a diss response to Drake within 20 minutes of Drake dropping his track against Lamar. Lamar insinuated there were leaks in Drake’s camp that made it possible for him to drop so fast, and that’s a diss in itself. Before the internet was so ubiquitous, that speed would have been impossible.

Drake’s response to his feud with Meek Mill nearly 10 years ago saw him release two songs within four days. But Lamar dropped four songs within five days during this battle, including two in one day. Nobody had to rush out to buy CDs or pull over their cars to listen to the radio, as one founder recalled doing during Jay-Z’s infamous feud with Nas. Instead, tracks were quickly dropped on YouTube, shared on Twitter, and then streamed on Spotify on loop.

The speed of these releases does have its downsides: In another viral moment, Lamar confused actor Haley Joel Osment and televangelist Joel Osteen in his lyrics.

Fans have also called Drake “chronically online” during the rap battle, since their real-time posts about the raps seemed to influence him. Some fans accused him of referencing popular tweets and memes people made about him during the feud, then passing them off as his own thoughts and rapping about them. Numerous people online commented that it felt like Drake was writing his responses specifically for his fans to hear, rather than to respond to Lamar. That nearly instantaneous feedback loop stood in stark contrast to Lamar’s raps, which were poignant in their attacks solely against Drake.

This battle is also perhaps the first time such beef has expanded to tech platforms on a wide scale. Lamar fans used Google Maps to virtually vandalize Drake’s mansion, renaming it “Owned by Kendrick.” Streamers pulled long hours on platforms like Twitch, YouTube and Kick, waiting to see if they could be among the first to react to a newly dropped song.

Anthony Fantano, a popular music YouTuber, published no fewer than six live reaction videos responding to Drake’s and Lamar’s songs dropped over the last two weeks. These reaction videos became so popular that creators say Lamar (or his team) removed copyright restrictions from the songs, meaning they can profit from their videos. This move alone could give more meaning to the role of hip-hop reaction pundit.

AI has entered the chat

The Kendrick-Drake feud is also the first mainstream rap battle to use AI.

Artists across genres are reckoning with the coexisting threat and potential of this technology. Some have embraced AI as an opportunity: The art pop duo Yacht trained an AI on 14 years of their music to create the record “Chain Tripping” in 2019; Holly Herndon and Grimes have both developed tools for other artists to generate AI deepfakes using their voices. Other artists like Billie Eilish, Nicki Minaj and Katy Perry have protested against the use of AI to undermine human creativity.

Consent is a primary concern in artists’ debates about AI-generated music. Artists care so much about what their peers are doing because the use of AI implicates them all — unbeknown to them, their music might be used to train an AI model that another artist is using to supplement their music.

While Herndon is at the forefront of musical experimentation with AI, she also advocates for artists to retain control over their work. She uses AI in her art, but she is also a founder of Spawning, a startup that creates tools to help artists remove their work from popular AI training datasets. Meanwhile, chillwave musician Washed Out just released a controversial music video made entirely using OpenAI’s Sora, a text-to-video model that has not yet been released to the public.

Tupac’s estate would argue that Drake crossed a line because he didn’t have consent to emulate the late rapper. But Rich Fortune, the co-founder of AI-powered social planning app Hangtight, said it was creative that Drake was one of the first artists to use AI in a song, especially on a diss track. Fortune says, “There aren’t any rules in a battle.”

“If there were any time to see what the reaction would be, it would be now because punches aren’t pulled when at war,” he continued. He thinks that more artists will now seek to use AI vocals since Drake, one of the biggest artists in the world, effectively sanctioned its use.

In fact, one diss track against Drake in this feud used AI-generated work, and has since turned into a meme against him. Producer Metro Boomin took an AI song called “BBL Drizzy” and sampled it onto a track that has become one of the rallying cries against the rapper.

Meanwhile, artists as big as Beyoncé have taken a stance against the increasing presence of AI. In one of the few public comments she’s made about her genre-bending album “Cowboy Carter,” Beyoncé said: “The more I see the world evolving, the more I felt a deeper connection to purity. With artificial intelligence and digital filters and programming, I wanted to go back to real instruments.”

Fortune said the biggest hurdle now for artists who want to use AI is just getting permission. Living artists might not be so keen to be replicated by AI, but the estates of late musicians might be. The problem there is that many old-school artists who have died, like Tupac, couldn’t consent to being mimicked because AI-generated music did not exist before their deaths.

“I don’t know if that’s necessarily a good thing, but it’s the direction we’re headed,” Fortune said about using the work of late musicians. At the very least, he said, it opens up a new revenue source for the estates of artists who don’t mind being artificially reincarnated.

The Kendrick-Drake feud also revealed another point about AI: its potential to emulate artists whose style is less distinctive. Luke Bailey, the founder of the fintech Neon Money Club, said Drake’s more recent music lacks depth. That, paired with the allegations that Drake was so directly and deliberately drawing inspiration from what he saw on the internet, raises the concern that he is doing something an AI bot could one day do.

“There are two types of musicians: One who can play what someone tells her or him to play and one who can create something original from scratch,” Bailey said. “AI is the former at this stage in its development.”

Bailey is right. Large language models (LLMs), the type of artificial intelligence that powers most deepfake tools, are inherently uncreative. These models synthesize gigantic swaths of data and then respond to a user-generated prompt by predicting the most likely response.
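That prediction step can be illustrated with a toy bigram model — the simplest possible version of “predict the most likely next token.” This is a deliberately simplified sketch with an invented corpus, not how production LLMs are built, but it shows why the output is a statistical echo of the training data rather than something original:

```python
from collections import Counter, defaultdict

# A tiny corpus standing in for the gigantic swaths of data an LLM ingests.
corpus = "the beat drops and the crowd moves and the beat repeats".split()

# Count which word follows which: a bigram model.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training data."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "beat", because "the beat" appears most often
```

The model can only ever replay the statistics of what it has seen — which is Bailey’s point about copying a voice without the depth behind it.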

But the most celebrated music often takes the opposite approach: just look at Kendrick Lamar, a rapper whose bars are so complex that he remains the only musician outside classical and jazz to win a Pulitzer Prize. He’s often regarded as one of the foremost thinkers in music and is known for his commentary on race and politics. AI right now lacks the cultural nuance to form its own thoughts on society, let alone something as nuanced as race.

“[AI] can’t copy Kendrick’s depth, only his voice,” Bailey said, adding that fans have heard pretty convincing AI-generated Drake songs in the past. “AI doesn’t have any potent bars yet.”


9 Innovative Use Cases of AI in Australian Businesses in 2024

More than three-quarters of Australian businesses are excited about the opportunities AI presents to them. A lack of AI talent and skills is holding some projects back, but we’re already seeing remarkably innovative uses of AI by enterprises that have been in a position to take an early mover advantage. AI is also becoming embedded across industries and sectors, resulting in incredible innovation that is very specific to each sector.

Here are notable examples of Australian businesses using AI in innovative ways.

1. BHP

One of Australia’s most important mining companies, BHP, has adopted AI solutions to optimise the process of loading iron ore onto rail trucks and then offloading it onto ships for export.

This “machine vision technology” is important because unexpected surges in the volume of iron ore being loaded or unloaded can cause spillages and damage to the transport infrastructure. With the support of AI for monitoring loads, BHP is now operating more safely and achieved a 105,000-tonne production saving in just one area of its operations for the 2022-2023 financial year.
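The surge-detection idea behind that kind of monitoring can be sketched as a simple rolling-statistics check. This is an illustrative anomaly detector, not BHP’s actual system; the readings, window size and threshold below are all invented for the example:

```python
from statistics import mean, stdev

def flag_surges(load_rates, window=5, z_threshold=2.0):
    """Flag readings that deviate sharply from the recent rolling
    average -- the kind of sudden surge that can cause spillage."""
    alerts = []
    for i in range(window, len(load_rates)):
        recent = load_rates[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma and abs(load_rates[i] - mu) > z_threshold * sigma:
            alerts.append(i)
    return alerts

# Tonnes-per-minute readings (made up) with one sudden surge at index 7.
readings = [100, 102, 99, 101, 100, 102, 101, 160, 102, 100]
print(flag_surges(readings))  # → [7]
```

A real deployment would feed this kind of check from camera-derived volume estimates rather than a hand-typed list, but the alerting logic is the same shape.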

2. Telstra

Australia’s largest telecommunications company uses AI in customer service to improve product recommendations and help customers get to the outcomes they’re looking for more quickly.

Telstra also uses AI in its own networks and systems to help identify potential issues and flag cyber threats. As with any telco, Telstra is a particularly attractive target for cyber crime, so having the kind of real-time monitoring and response enabled by AI is particularly important.

3. Local council

AI has applications in the public sector, too. One local council (which remained anonymous) partnered with a local AI company to deploy surveillance tools that would monitor school zones for traffic infringements and compliance.

As a result of this deployment, the council was able to increase its school zone patrolling range and rate by 900%. Automation also streamlined the submission and follow-up of infringement claims, and because the process was driven by technology rather than human judgement, there was no risk of bias. The end result has been better compliance enforcement and, more importantly, safer school zones for children.

4. Treasury Wine Estates

As an agricultural business, Treasury Wine Estates is heavily impacted by weather and climate events. Vineyards are particularly susceptible to everything from frosts to smoke from bushfires; even if the plants are spared, the crop’s flavours can be destroyed, leaving the grapes effectively unusable for winemaking.

Treasury Wine uses climate data and an AI algorithm to make its own forecasts based on what will impact its crops. In addition to being able to prepare for frosts and fires, this AI system helps optimise water use by calculating the exact amount required on a vine-by-vine basis.

5. National parks management of Kakadu

One of the more complex national parks to manage in Australia is Kakadu in the Northern Territory. It is one of Australia’s largest national parks, and also one where conditions can be challenging to work in, with temperatures regularly exceeding 40 degrees, and humidity in excess of 60%.

To address this challenge, CSIRO worked with Microsoft and the Kakadu Rangers to develop a system in which drones rapidly capture large numbers of photos, paired with software that analyses the data and monitors ecosystems across the national park. This application of AI was directly linked to the restoration of a large colony of magpie geese to wetlands that had previously been choked by weeds.

6. Royal Perth Hospital

Royal Perth Hospital has embraced a solution called HIVE — Health in a Virtual Environment — to assist staff by continuously monitoring patients who need close observation. The system keeps track of vitals, including heart rate, blood pressure and oxygen levels, and any anomalies are flagged immediately at the HIVE command centre, where staff can instantly communicate with nurses and doctors via audio-visual units.

This system ensures that patients have the best possible standard of care, while freeing staff up to move around and work efficiently, confident that critical patients are still being closely monitored.
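The flagging behaviour described above boils down to range checks on a stream of vitals. Here is a minimal sketch in that spirit; the ranges and reading names are invented for illustration and are not clinical guidance or HIVE’s actual logic:

```python
# Illustrative normal ranges (invented for the example, not medical advice).
NORMAL_RANGES = {
    "heart_rate": (50, 110),    # beats per minute
    "systolic_bp": (90, 160),   # mmHg
    "oxygen_sat": (92, 100),    # percent SpO2
}

def check_vitals(patient_id, vitals):
    """Return alert messages for any reading outside its normal range."""
    alerts = []
    for name, value in vitals.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            alerts.append(f"{patient_id}: {name}={value} outside {low}-{high}")
    return alerts

print(check_vitals("bed-12", {"heart_rate": 48, "systolic_bp": 120, "oxygen_sat": 95}))
# → ['bed-12: heart_rate=48 outside 50-110']
```

In a command-centre setting, a function like this would run continuously against live sensor feeds, with alerts routed to staff rather than printed.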

7. Commonwealth Bank

CommBank has always been an enthusiastic adopter of technology, and it has taken a leadership position with AI applications and deployment. The bank is using AI to read, analyse and process customer documents more quickly, halving the time it takes to verify someone’s income to process a loan.

Internally, CBA is using generative AI to streamline internal processes, including coding. For example, the company has accepted nearly 80,000 lines of code recommended by GitHub Copilot, significantly increasing the speed at which its engineers can work.

8. Sydney Airport

Another valuable use of AI is to analyse data in real time and use it to provide feedback. This is what Sydney Airport has done in collaboration with Google, launching an AR-enhanced application that uses AI to match against tens of billions of images of the airport and work out where the user is. From there, it can provide wayfinding assistance and help passengers locate gates, baggage claims, retail and food outlets, bathrooms and more.

In addition to saving passengers the need to find an information kiosk, the app is multi-lingual and designed to help overcome language barriers in navigating an enormously complex airport environment.

9. The National Pickleball League

If you haven’t heard of Pickleball yet, chances are you will in the near future. It is one of the fastest-growing sports worldwide — including in Australia — and part of what’s driving that growth is its embrace of AI.

The National Pickleball League in Australia partnered with PlaySight, an AI-powered sports video and analysis company, to give NPL members the ability to record, livestream, analyse and share highlights and replays online. In addition to allowing for the automation of marketing content to support the sport, this platform is a useful analytics tool to improve the performance of players and the strength of competition in the league.

What Australian IT pros need to know

AI can be deeply embedded into the very processes that underpin every sector. Previously, professionals would use laptops, word processors, spreadsheets and sensors to collect and analyse data, but the technology was distinct from the work. With AI, there is the opportunity to embed technology within the work itself, and this is what the more forward-thinking and technically-ready organisations are embracing.

For IT professionals, this means the application of AI technology might be wildly different from one sector to the next. An IT professional working for a winery is going to have a totally different intended use of technology than one supporting Sydney Airport.

This, in turn, means IT professionals are going to need to develop a deep understanding of not just technology but also the sector and business they’re working within. As a result, we may see less movement across sectors from IT professionals looking for new work opportunities, and AI will be the catalyst for IT professionals to become even more deeply ingrained business enablers and leaders within their sectors.