Generating the AI Dividend: Transforming Society’s Economic Value Curve


In my previous blog, ‘AI Dividend, Universal Basic Income, and Economic Multiplier Effect,’ I explored how Artificial Intelligence (AI) can create an AI Dividend that yields staggering economic benefits by intelligently automating routine tasks, optimizing decision-making processes, and fostering innovation across all sectors of society. This blog expands on that discussion by providing a ‘How To’ guide for activating and realizing the AI Dividend[1].

The “AI Dividend” encapsulates the substantial and recurring economic benefits generated by advancements in artificial intelligence (AI), emphasizing the importance of harnessing these gains to improve societal well-being and economic equity.

Step 1: Understanding The Economic Value Challenge

The economic value curve quantifies the relationship between an organization’s resources invested and the resulting value generated. This curve illustrates how initial capital, labor, and technology investments yield significant increases in value and productivity (Figure 1).


Figure 1: Economic Value Curve Challenge

However, as investments continue to rise, the incremental gains diminish, reflecting the Law of Diminishing Returns. After a certain point, each additional input unit contributes less to the output than the previous unit. This phenomenon imposes a critical challenge on economic growth, as it limits the potential for sustained increases in efficiency and productivity.
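The shape of this curve can be sketched with a simple concave production function. The function and its parameters below are illustrative assumptions, not data behind Figure 1:

```python
# Illustrative sketch of the Law of Diminishing Returns.
# Value follows a concave production function, value = A * investment**alpha,
# with 0 < alpha < 1, so each additional unit of input adds less than the last.

def value(investment: float, A: float = 10.0, alpha: float = 0.5) -> float:
    """Value generated at a given investment level (hypothetical units)."""
    return A * investment ** alpha

def marginal_gain(investment: float, step: float = 1.0) -> float:
    """Extra value produced by one more unit of investment."""
    return value(investment + step) - value(investment)

for units in (1, 10, 100, 1000):
    print(f"units invested: {units:5d}  marginal gain: {marginal_gain(units):6.3f}")
```

Running the sketch shows the marginal gain shrinking from about 4.1 at one unit to under 0.2 at a thousand units: the flattening tail of the curve in Figure 1.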

Step 2: Mastering Nanoeconomics

Nanoeconomics is the economic theory of the predicted behavioral and performance propensities (insights) of individual entities, whether human or device.

Nanoeconomics is a growing area of study within economics that focuses on measuring and codifying the behaviors and performance of individual entities such as consumers, patients, students, operators, and devices like compressors and motors. Unlike traditional macroeconomic and microeconomic approaches that deal with broader trends, nanoeconomics derives and drives value from the predicted propensities of the individual entities (Figure 2).


Figure 2: Nanoeconomics

The critical difference between Nanoeconomics and traditional economic analysis lies in the scope and granularity of the data. While conventional economics often relies on averages and general trends, Nanoeconomics captures individual entities’ specific behaviors and performance characteristics, enabling precise actions that increase efficiency in resource allocation and operational execution.
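The contrast can be made concrete with a small sketch. The device names and repair rates below are invented purely to illustrate the difference between an average-driven decision and an entity-level one:

```python
# Hypothetical sketch: fleet-average analysis vs entity-level (nanoeconomic)
# analysis. Device names and repair rates are invented for illustration.

from statistics import mean

# Observed repair events per year for each individual device
devices = {"compressor-A": 0.2, "compressor-B": 4.8, "motor-C": 0.1}

# Traditional approach: one average drives the same decision for every device.
fleet_average = mean(devices.values())

# Nanoeconomic approach: each entity carries its own propensity, so action
# is taken only where that individual entity actually needs it.
needs_attention = {name: rate for name, rate in devices.items() if rate > 1.0}

print(f"fleet average: {fleet_average:.2f} repairs/year")   # masks the outlier
print(f"needs attention: {sorted(needs_attention)}")        # only compressor-B
```

The fleet average (1.70 repairs/year) suggests moderate, uniform attention across the fleet, while the entity-level view isolates compressor-B as the only device that warrants intervention.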

Step 3: Leveraging AI to Create Entity-based Analytic Profiles

An Analytical Profile is an analytics-driven representation of a human or device/thing entity’s behaviors, preferences, and interactions, enabling more accurate, relevant, and meaningful predictions and personalized decision-making.

Using Generative AI (GenAI), we can create individual-entity Analytic Profiles by analyzing vast amounts of data to uncover and codify individual-level behaviors, preferences, interactions, patterns, and correlations. These insights create highly accurate models that predict future behaviors and preferences, facilitating more targeted and effective decision-making. For instance, a retailer could use GenAI to develop detailed customer profiles that predict purchasing habits and tailor marketing efforts to individual preferences, enhancing customer engagement and satisfaction (Figure 3).


Figure 3: Analytic Profiles
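As a concrete, entirely hypothetical illustration of the retailer example above, an Analytic Profile can be thought of as a small data structure pairing codified behaviors with predicted propensities. The field names, entities, and scores below are assumptions, not a prescribed schema:

```python
# Minimal sketch of an entity-level Analytic Profile. Field names, entities,
# and propensity scores are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class AnalyticProfile:
    entity_id: str
    entity_type: str                                   # e.g. "customer", "device"
    behaviors: dict = field(default_factory=dict)      # codified observed behaviors
    propensities: dict = field(default_factory=dict)   # predicted scores in [0, 1]

    def top_propensity(self) -> str:
        """Return the predicted behavior this entity is most likely to exhibit."""
        return max(self.propensities, key=self.propensities.get)

# A retailer's customer profile driving a personalized marketing decision:
profile = AnalyticProfile(
    entity_id="cust-001",
    entity_type="customer",
    behaviors={"avg_basket": 42.50, "visits_per_month": 6},
    propensities={"buy_coffee": 0.91, "redeem_coupon": 0.34, "churn": 0.05},
)
print(profile.top_propensity())  # the offer most likely to land for this customer
```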

Analytic Profiles are instrumental in transforming the economic value curve and enabling organizations to “do more with less.” For example:

  • Finance: Investor and financial instrument analytic profiles enable highly individualized financial products and personalized investment advice, delivering tailored financial solutions that maximize returns with minimized risk.
  • Healthcare: Patient and treatment analytic profiles enable personalized treatment plans and more accurate health outcome predictions, which improve the quality of care while significantly reducing overall healthcare costs by optimizing resource utilization and minimizing unnecessary treatments.
  • Retail: Customer and product analytic profiles optimize inventory management and pricing strategies that reduce inventory costs and enhance customer satisfaction through better availability and pricing accuracy.
  • Manufacturing: Analytic profiles of machinery and production lines facilitate granular predictive maintenance, optimize production schedules, and minimize downtime, which increases operational efficiency, lowers costs, and extends equipment lifespan.
  • Hospitality: Guest analytic profiles enable personalized services that enhance guest experiences while reducing maintenance, management, and inventory costs.
  • Transportation: Vehicle and driver analytic profiles enable optimized routing, predictive maintenance, and enhanced safety measures while reducing fuel consumption, lowering emissions, and enhancing service reliability.
  • Energy: Energy production analytic profiles optimize grid management, improve demand forecasting, and offer personalized energy-saving recommendations to consumers, contributing to more efficient and sustainable energy use while reducing waste and operational costs.
  • Agriculture: Analytic profiles for crops and soil conditions enable farmers to customize irrigation schedules, fertilization usage, and pest control measures, resulting in higher crop yields and more sustainable farming practices while reducing costs associated with fertilizers, herbicides, pesticides, and water (Figure 4).

Figure 4: Entity-based Precision Decisions to Optimize Use Cases

These industry examples highlight how GenAI-generated analytic profiles can drive significant efficiency gains and value creation while reducing costs and operational risks by applying micro-level entity predictive propensities to improve operational effectiveness and resource optimization.
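The manufacturing example above, granular predictive maintenance, reduces to a per-entity decision rule. The sketch below is a hypothetical simplification: it services a specific machine only when that machine's own predicted failure propensity makes inaction more expensive than action (all costs and scores are invented):

```python
# Hypothetical sketch of an entity-based precision decision: per-machine
# predictive maintenance driven by each machine's own failure propensity
# rather than a fixed fleet-wide schedule. All numbers are illustrative.

def maintenance_decision(failure_propensity: float,
                         downtime_cost: float,
                         service_cost: float) -> bool:
    """Service this machine only if its expected failure cost exceeds
    the cost of servicing it now."""
    expected_failure_cost = failure_propensity * downtime_cost
    return expected_failure_cost > service_cost

# Two machines of the same model with very different individual risk profiles:
print(maintenance_decision(0.70, downtime_cost=10_000, service_cost=500))  # service now
print(maintenance_decision(0.02, downtime_cost=10_000, service_cost=500))  # leave alone
```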

Step 4: Transforming the Economic Value Curve

Organizations can significantly improve operational effectiveness and resource optimization by embracing Nanoeconomics and creating granular entity-level Analytic Profiles. This approach reshapes the traditional Economic Value Curve, often constrained by the Law of Diminishing Returns, into a more dynamic and flexible model that continuously adapts based on real-time data (Figure 5).


Figure 5: Transforming the Economic Value Curve

Operationalizing and scaling AI, Nanoeconomics, and analytic profiles can empower organizations to achieve “Do More with Less” through granular, entity-based actions and decisions, including:

Resource Optimization:

  • Prioritize resources to high-value activities and opportunities: Allocate time, money, and staffing to the most impactful projects and initiatives.
  • Eliminate low-value activities and opportunities: Identify and remove tasks and processes that do not contribute to organizational goals.

Process Efficiency:

  • Reengineer to create dynamic operational processes: Develop intelligent workflows that automate or eliminate ineffective steps, ensuring streamlined and efficient processes.
  • Eliminate re-work: Implement systems and checks to complete tasks correctly the first time, reducing the need for corrections.

Quality and Reliability:

  • Optimize matching specific offers and treatments to particular individuals: Use analytic profiles to tailor products and services to individual needs, improving satisfaction and engagement.
  • Replace or fix parts only when they need to be fixed: Implement predictive maintenance to ensure repairs are made only when necessary.
  • Fix products right the first time: Enhance quality control processes to ensure products meet rigorous performance standards before being deployed.
  • Improve supplier and vendor quality and reliability: Collaborate closely with suppliers to ensure they meet quality standards and delivery timelines.

Waste Reduction:

  • Eliminate waste, shrinkage, and fraud: Detect behavioral and performance anomalies to prevent inefficiencies and dishonest activities.
  • Optimize just-in-time products and services fulfillment: Align production and delivery schedules closely with customer demand to minimize excess inventory and reduce storage costs.

Personalization and Adaptation:

  • Expand hyper-personalization to drive more effective engagements: Utilize detailed customer profiles to provide highly personalized experiences that increase customer loyalty, product sales, and profits.
  • Learn and adapt more quickly to economic, market, cultural, political, technological, and societal changes: Use real-time analytics that can continuously learn and adapt to stay agile and responsive to changes in the external environment.
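Taken together, the actions above reduce to a simple loop: score each activity by predicted value, fund the highest-value items within the available budget, and eliminate those that contribute nothing. A hypothetical sketch (activity names, values, and costs are invented):

```python
# Hypothetical sketch of "prioritize high-value, eliminate low-value":
# rank activities by predicted value per unit of cost, fund the best ones
# within a budget, and drop anything that contributes no value.

def allocate(activities: dict, budget: float, min_value: float = 0.0) -> list:
    """activities maps name -> (predicted_value, cost); returns funded names."""
    ranked = sorted(activities.items(),
                    key=lambda item: item[1][0] / item[1][1], reverse=True)
    funded, spent = [], 0.0
    for name, (predicted_value, cost) in ranked:
        if predicted_value <= min_value:   # eliminate low-value activities
            continue
        if spent + cost <= budget:         # prioritize within the budget
            funded.append(name)
            spent += cost
    return funded

portfolio = {
    "personalized-offers": (90.0, 30.0),
    "legacy-report":       (0.0, 10.0),   # contributes nothing: eliminated
    "predictive-maint":    (60.0, 40.0),
    "vanity-dashboard":    (5.0, 50.0),   # poor value per cost: unfunded
}
print(allocate(portfolio, budget=70.0))
```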

Summary: “Doing More with Less”

The “Doing More with Less” principle encapsulates how AI, nanoeconomics, and entity-level analytic profiles can drive greater efficiency and productivity, ultimately making the AI Dividend a tangible reality. AI technologies can uncover critical entity-level insights that lead to more effective decision-making and streamlined operations by optimizing resource allocation, re-engineering critical operational tasks, fine-tuning resource utilization, minimizing wasted effort, and scaling operational effectiveness. Coupling the AI Dividend with the Economic Multiplier Effect[2] can unlock unprecedented levels of value that can be used to promote a more efficient, sustainable, and prosperous future for all.

The only barrier hindering our ability to realize and reallocate the AI Dividend will be leadership’s fortitude to ensure that the focus is not just on amassing unprecedented levels of personal wealth but on improving the quality of life for everyone.

Remember, what we do in life echoes in eternity.

[1] The “AI Dividend” concept is inspired by the economic benefits of the “Peace Dividend.” https://www.linkedin.com/events/shouldleadersinvestmoreindatama7204498853482348545/theater/

[2] The Economic Multiplier Effect occurs when an initial injection of spending (such as government investment or consumer spending) leads to a more significant overall increase in economic activity and national income due to successive rounds of re-spending by businesses and consumers.
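The footnote's definition has a compact closed form: if each round re-spends a fraction c of what it receives, an injection I generates total activity of I * (1 + c + c^2 + ...) = I / (1 - c). A quick worked example, where the 0.8 re-spending fraction is an assumed value:

```python
# Worked example of the Economic Multiplier Effect defined in footnote [2].
# The fraction re-spent each round (c) is an assumed, illustrative value.

def total_activity(injection: float, c: float) -> float:
    """Total activity from successive rounds of re-spending:
    injection * (1 + c + c**2 + ...) = injection / (1 - c), for 0 <= c < 1."""
    return injection / (1.0 - c)

# A $1M injection with 80% of each round re-spent yields $5M of total activity:
print(f"${total_activity(1_000_000, c=0.8):,.0f}")
```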

I tested Minitailz’ AI-powered pet tracker, and it solved my biggest pain point as a dog owner


ZDNET's key takeaways

  • Invoxia's Minitailz Health and GPS Tracker retails for $99 on Amazon or the company website. An additional subscription cost is required.
  • The tracker helps eliminate the guesswork about your dog's whereabouts by tracking its location, activity, and even some biometrics.
  • The subscription cost to use the tracker is more expensive than the actual hardware, making it a long-term investment.

A year ago, I adopted my first dog, a Yorkshire Terrier named Jimmy. Little did I know that with the cute face and floppy ears also came loads of parenting anxiety, made real by a $2,000 emergency vet bill. As a result, when I saw the Minitailz Health & GPS Tracker at CES 2024, I had to have it.


Leveraging AI and other advanced tech, the gadget tracks your dog's location, activity, and biometrics, including walking, playing, running, sleeping, and eating times, as well as resting heart and respiratory rates. These capabilities earned it the "best innovation in the AI category" recognition at CES.


In the box, you get Minitailz, a USB-C charger, and the ring to attach it to your dog's collar. To place it onto your dog's collar, you just slip it right through the opening. Then, just download the app, create an account, select a subscription of either $129.95 for one year or $229.95 for two years, and you are ready to go.


To get the most out of your Minitailz, you need to let it gather as much information about your dog as possible, so it should be kept on your dog all day and night. To ensure that my experience with the gadget was as accurate and optimized as possible, I kept it on Jimmy's collar for over three months, and the information it gathered in that time was impressive.

As advertised, it presents all the health and activity insights in a way that is easy to read, access, and understand, as shown in the photos below. However, even though it's nice to have access to all this data, if I am being honest, I rarely look at it. My favorite feature of the app is the daily reports.

The reports present all the data collected in the past 24 hours in a fun, comprehensive way, adding necessary context where it can be helpful, such as comparing your dog's vitals to those of others in the Minitailz community. This is especially useful because, according to the company, a higher-than-normal resting respiratory rate can indicate impending heart failure, so having these insights into Jimmy's health gives me peace of mind.

As a crazy dog mom, this feature alone is worth the investment because it helps bridge the communication gap with my four-legged friend. It shows me things Jimmy can't express, such as whether he had enough playtime, walks, exercise, and a good night's sleep.

The second best feature is the activity notifications. When I write stories from the office, my partner works from home and cares for Jimmy. Instead of feeling like I am missing out, I get notifications on my phone that Jimmy went on a walk, got in a car, or is currently playing. This feature would be especially useful if you leave your pet at a doggy daycare or with a sitter to ensure everything is going according to schedule.

The only caveat is that sometimes it cannot accurately decipher whether Jimmy has the zoomies or is on an actual walk. This caused my partner to call me panicked once, thinking Jimmy had broken out of the apartment when, in reality, he was running around the living room playing with his toys.


Another great feature for concerned dog parents is the GPS feature, which tracks the pet's location. Compared to the Apple AirTag, it refreshes less quickly. However, the advantage is that it won't start beeping if you are away for too long. Another perk is that, unlike the AirTag, it is rechargeable, so you control how much battery it has instead of it just dying randomly.

The Minitailz's fantastic battery life is also a highlight. It was advertised as lasting two whole weeks, and in my experience, that is accurate. My favorite part is that it notifies you via email and push notifications when your pup's battery is running low, making it impossible to miss. I also shared my account login with my partner, which doubles the number of people who have access to all of Jimmy's insights and needs.

ZDNET's buying advice

If you're a data aficionado and, as a result, a fan of wearables, this gadget is for you and your pet. With the Minitailz, you can access insights for your furry friend similar to those you get from your smartwatch or fitness tracker. Ultimately, this comprehensive health data helped me learn more about my pet's needs. However, given the steep subscription cost, I would skip it if you don't see the value in a ton of metrics that aren't necessarily actionable.


6 Incredible Ways AI is Helping Wildlife Conservation


While biodiversity and wildlife may not immediately spring to mind when considering AI, conservation agencies have long employed a range of technologies to monitor and ensure the well-being of ecosystems and wildlife.

According to research, the market for AI in forestry and wildlife was estimated at US $1.7 billion in 2023 and is projected to expand at a compound annual growth rate (CAGR) of 28.5% to reach US $16.2 billion by 2032.

Let’s look at some of the top use cases of AI in wildlife conservation.

AI-Powered Wildlife Monitoring

Conventional techniques frequently depended on manual observation, which was labour-intensive and prone to human error. AI-powered monitoring systems equipped with cutting-edge sensors and cameras help address this.

These technologies track, identify, and detect animals in real time, gathering information about their habitat preferences and population dynamics. Machine learning algorithms analyse the resulting large-scale datasets, allowing researchers to derive meaningful insights.

.@RESOLVORG’s TrailGuard AI camera uses the @IntelMovidius Myriad 2 VPU to detect poachers in Africa’s wildlife reserves and alert park rangers before endangered animals are killed. https://t.co/iX3VMN3gK3 #intelAI pic.twitter.com/cZ82KnrTl0

— Intel AI (@IntelAI) January 3, 2019

For instance, wildlife officials track the movement of animals in the Kanha-Pench corridor in Madhya Pradesh using the TrailGuard AI camera-alert system.

It runs on-the-edge algorithms to detect tigers and poachers and transmit real-time images to designated authorities responsible for managing prominent tiger landscapes.

Guardians of the Wild

Many national parks have installed camera traps – cameras with infrared sensors deployed in forests to monitor the movement of potential poachers – that harness the power of AI.

Recently, Susanta Nanda, a wildlife enthusiast and an Indian Forest Service (IFS) officer, shared on X images of intruders captured by an AI-enabled camera at Similipal Tiger Reserve in Odisha. This quick response time, made possible by AI, not only helped apprehend intruders but also deterred potential poachers.


Indian Forest Service officer Susanta Nanda. Source: X

AI-based surveillance systems, under the name Gajraj, will soon be deployed in elephant corridors across the country.

Species Identification


Using AI for camera detection. Image source: X/@ai_conservation

The Wildbook project uses AI in species identification. AI algorithms identify specific animals based on their distinct physical qualities, such as the pattern of spots on a giraffe or the shape of a whale’s tail. This automated method greatly reduces the time and effort scientists need for species identification.

Satellite Imagery to Track Endangered Wildlife

SilviaTerra (now known as NCX) creates comprehensive maps of forests by analysing satellite imagery. These maps offer important information about the kinds of trees found there, how well-maintained the forests are, and how much carbon they can store. This information is essential for managing forests in ways that lessen the effects of climate change.

An Eagle’s Eye for The Wild

Traffic, a well-known non-governmental organisation that works on the worldwide trade in wild animals and plants, has created an AI programme that analyses internet data about the trade in wildlife.

The “AI Wildlife Trade Analyst”, an AI tool, can interpret enormous volumes of data from many internet sources, such as social media, online forums, and e-commerce platforms. It identifies and categorises information about wildlife commerce, including species names, items, prices, and locations. The data is then utilised to produce insights regarding the trade’s scope, makeup, and patterns.

PATTERN, which was created with the aid of Microsoft Azure AI Custom Vision, is an end-to-end computer vision platform and AI service that offers a user-friendly interface for labelling photos.

Habitat Analysis

An example of the land-cover mapping work around part of the Chesapeake Bay. Image source: Microsoft

The High-Resolution Land Cover Project of the Chesapeake Conservancy used AI to produce a high-resolution map of the watershed of the Chesapeake Bay, which is roughly 100,000 square miles. Compared to traditional 30-metre resolution land cover data, the map’s one-metre resolution offers 900 times more information.

It’s important to note that implementing AI technologies in wildlife conservation can be costly and may require significant technical expertise. Despite these challenges, AI’s benefits and potential applications in wildlife conservation are vast and promising.

The post 6 Incredible Ways AI is Helping Wildlife Conservation appeared first on AIM.

Meet The Indian Techies Who Turned Into Sports Stars

Indian-origin Saurabh Netravalkar, who represented the India under-19 team and is now USA’s top cricketer, became an international sensation after winning the recent T20 World Cup match for USA against Pakistan. The interesting part: Netravalkar is a Principal Member of Technical Staff at Oracle, where he has been working for eight years.

Before relocating to the United States in 2015, Netravalkar had a brief stint in Indian domestic cricket. He represented Mumbai in the prestigious Ranji Trophy and was part of the India U-19 team, alongside future cricket stars including KL Rahul, Mayank Agarwal, Harshal Patel, Jaydev Unadkat, and Sandeep Sharma. During the 2010 ICC U-19 World Cup, he emerged as India’s highest wicket-taker, securing nine wickets across six matches.

A graduate of the University of Mumbai and the renowned Cornell University, Netravalkar also co-founded CricDeCode, an app dedicated to cricket.

While social media is full of memes and tweets about the coder turned cricketer, we bring you a list of popular Indian sports personalities who also hold engineering degrees and once worked as techies.

Manasi Joshi

The Indian para-badminton player holds a degree in Electronics Engineering from K. J. Somaiya College of Engineering in Mumbai, and worked as a software engineer until a tragic accident in 2011 led to the amputation of her left leg.

Despite this setback, Joshi found solace in badminton, which she had played since she was six years old. She started playing para-badminton in 2012 and won a gold medal at the 2019 Para-Badminton World Championships in Switzerland, becoming the first Indian athlete to win a gold medal in the sport.

Shikha Pandey

Pandey holds a degree in Electronics and Electrical Engineering from the Goa College of Engineering and also served as an Indian Air Force officer.

After completing her engineering degree in 2010, Pandey was offered jobs by three multinational companies, but she declined all these placement offers and decided to take a year off and focus on her cricketing career.

Pandey represented Goa in domestic cricket and was part of the Indian Women’s Cricket team that won the 2017 ICC Women’s World Cup Qualifier. At the time of the 2020 ICC Women’s T20 World Cup, she held the rank of Squadron Leader.

Sathiyan Gnanasekaran

Gnanasekaran holds a degree in Information Technology from St. Joseph’s College of Engineering in Chennai, and has worked for companies like ONGC as a software engineer. He started playing table tennis as a hobby and was spotted by former Indian paddler Subramanian Raman, who encouraged him to pursue the sport seriously.

Gnanasekaran became the first Indian table tennis player to break into the World Top-25 ITTF rankings in May 2019, after attaining his career best World ranking of 24.

Ravichandran Ashwin

The famous Indian off-spinner pursued a B.Tech degree in Information Technology from SSN College of Engineering in Chennai, and worked as an engineer before turning to cricket.

Ashwin started playing cricket at the age of nine for YMCA and was coached by Chandrasekar Rao during the early part of his career. He represented the Indian under-17 team as an opening batter and later took up medium-pace bowling before switching to off-spin.

Akash Madhwal

The Mumbai Indians star of IPL 2023 pursued a degree in civil engineering from the College of Engineering Roorkee in Uttarakhand. Before turning to cricket, he worked as a practicing engineer. Madhwal made his domestic cricket debut for Uttarakhand in 2019 and has since taken 67 wickets in 56 professional matches across formats.

He joined the Mumbai Indians squad in 2022 as a replacement for the injured Suryakumar Yadav but did not get to play. However, in the 2023 IPL season, Madhwal seized his opportunity and delivered a record-breaking performance in the Eliminator match against Lucknow Super Giants.

Shikha Tandon

The renowned Indian swimmer did her B.Sc. in biotechnology, genetics, and biochemistry from Jain College, Bangalore, India in 2003.

Tandon represented India at the 2004 Athens Olympics, where she participated in the 50m and 100m freestyle events, becoming the first Indian swimmer to qualify for two separate events in an Olympic competition.

She has won 146 national medals and 36 international medals, including five gold medals. After retiring from competitive swimming in 2009, she moved to the USA to pursue a graduate course in bio-sciences.

Tandon worked with the United States Anti-Doping Agency (USADA) for over five years and is currently the Director of Global Partnerships at SVEXA, an exercise intelligence and sports analytics company.

Anil Kumble

The legendary Indian cricketer holds a degree in Mechanical Engineering from Rashtreeya Vidyalaya College of Engineering in Bangalore. He began his cricketing journey at a young age, playing for his school and later for the Karnataka State team. However, he did not give up his engineering career immediately.

Before turning to cricket full-time, Kumble worked as an engineer for a brief period. He even created a software package for the Indian cricket team in 1996, which was an extension of the scoring sheet to gather data for analysis.

Javagal Srinath, the former Indian fast bowler, and EAS Prasanna, the spin legend, also hold engineering degrees.

The post Meet The Indian Techies Who Turned Into Sports Stars appeared first on AIM.

KissanAI accepted for NVIDIA Inception Program

KissanAI, a pioneering startup in agricultural artificial intelligence, has been accepted into the prestigious NVIDIA Inception program. The program supports startups revolutionizing industries with advancements in AI and data science.

As the first company to introduce a vernacular, voice-based copilot application for farmers, KissanAI has been at the forefront of transforming agriculture through its Generative AI platform, AgriCopilot, and its domain-specific Agri LLMs, called Dhenu.

By joining NVIDIA Inception, KissanAI gains access to cutting-edge technology, expert mentorship, and collaboration opportunities to further enhance its intelligent farming solutions.

“We are thrilled to be part of NVIDIA Inception and leverage their resources to make our AI capabilities faster, more accurate, and impactful for farmers,” said Amol Patil, CEO of KissanAI. “This partnership will accelerate our mission to drive agricultural productivity and sustainability.”

Through the program, KissanAI will have early access to NVIDIA’s latest hardware and software innovations. The company will also benefit from NVIDIA’s global network of AI experts and marketing support to expand its reach.

Founded in 2020, KissanAI’s AgriCopilot platform deploys AI workflows for use cases such as advanced crop advisory, conversational commerce, and sales assistance.

Its Dhenu LLMs leverage a curated agricultural knowledge base to provide nuanced insights to farmers. The startup has already outpaced industry giants like Google and OpenAI in the agri-tech space.

The acceptance into NVIDIA Inception marks a significant milestone for KissanAI as it continues to push the boundaries of AI in agriculture. The company remains dedicated to developing innovative solutions that meet the evolving needs of farmers worldwide.

The post KissanAI accepted for NVIDIA Inception Program appeared first on AIM.

5 Innovative Ways Film Industry is Embracing AI 


Sathyaraj, renowned for his portrayal of Kattappa in the Baahubali series, graced the screens on June 7, 2024, in a young avatar! The Tamil movie ‘Weapon’ pioneers the use of AI to recreate a younger version of the 68-year-old actor.

In another interesting update, India’s Intelliflicks Studios, in Chandigarh, is set to produce the world’s first full-length AI-generated film, titled ‘Maharaja in Denims’, based on the 2014 novel by Indian author Khushwant Singh; the studio was co-founded by Gurdeep Singh Pall, a former Microsoft VP.

From casting decisions to predicting box office success, AI is transforming filmmaking. Let’s explore how.

Casting Actors

AI can speed up casting procedures by autonomously conducting auditions. Based on set criteria and textual image descriptions, AI platforms search actor databases for suitable candidates.

When fed with a large amount of data detailing the facial features of actors and emotions, the algorithm can be used to overlay the digital face of the actor on a body double to retain the natural expressions of the original performer.

Furthermore, filmmakers apply AI to create diverse digital characters, for instance, the creation of Thanos in ‘Avengers: Infinity War’.

Interestingly, the late actor James Dean may be brought back to life in a movie titled ‘Back to Eden’ through an actor-cloning technique. Using AI technology, a digital clone of the actor can be created, similar to the technology used to generate deepfakes, that can talk, walk, and interact on screen with co-actors in the film.

1/ Many actors’ dream is to leave a lasting legacy. But the recent use of AI technology raises concerns.
Take James Dean, who passed away in 1955 but is now cast as the star in "Back to Eden" using a digital clone created by AI. 🎥

— Mushfiq Sajib (@mushfiq_sajib) July 25, 2023

Film Editing

AI helps in movie editing processes, particularly in the creation of film trailers.

Among the editing tools in Adobe Premiere Pro, the Auto Reframe tool dynamically adjusts aspect ratios, optimising trailers for various platforms including social media.

Additionally, the Colour Match tool ensures consistency in colour grading across diverse shots, enhancing visual coherence throughout projects.

Notably, AI was used in editing the trailer for the science fiction film ‘Morgan’.

People are missing a crucial part of OpenAI's Sora announcement – it's in this Sora-generated video.
Sora can generate & AUTOMATICALLY EDIT the video.
The AI can recognize that a prompt with "movie trailer" means an edited video.
🤯 That's mindblowing. pic.twitter.com/HnW7milZ5l

— quintin (@QuintinAu) February 16, 2024

Music Generation

The integration of AI into the music industry is gaining momentum, with artists increasingly incorporating AI tools into their live performances and creative processes. For instance, renowned pop icon Madonna used an AI-powered text-to-video tool to produce visuals displayed on screens during her concerts.

In another example, an AI-generated, Punjabi-themed, Bollywood-inspired song was overlaid onto the ‘Kala Chashma’ video.

Plain White T’s react to Drake’s ‘Wah Gwan Delilah’ remix 😭
“It’s crazy that everybody thinks that’s real. That’s not Drake” pic.twitter.com/UzKE4nJxve

— NFR Podcast (@nfr_podcast) June 5, 2024

Script Writing

By processing extensive datasets of existing movie scripts, machine learning algorithms generate original scripts by learning from patterns and structures.

Furthermore, AI plays a crucial role in script analysis for potential film adaptations. Algorithms can study script storylines and identify potential areas for improvement by posing questions and suggesting enhancements.

An example of AI’s usage in script writing is ‘Sunspring’, a sci-fi film scripted entirely by an AI system named Benjamin. The film was screened at the Sci-Fi London Film Festival in 2016.

Short Films

AI is instrumental in generating or enhancing diverse elements of short films, including images, animations, scripts, music, and editing.

One of the examples is ‘The Frost’, lauded as the world’s first AI-generated film. This 12-minute film employs DALL-E 2, an image-generating AI, to bring its script to life, complemented by D-ID’s animation of static visuals.

The film industry is undergoing a groundbreaking evolution with AI, de-aging stars and creating entire movies. As AI redefines casting, editing, scriptwriting, and more, it promises an era of unparalleled innovation and creativity in filmmaking.

The post 5 Innovative Ways Film Industry is Embracing AI appeared first on AIM.

Here’s every iPhone model that will support Apple’s upcoming AI features (for now)

iPhone 15 Pro Action Button

After staying silent for two years about its AI developments, Apple is finally gearing up to share its latest projects with the public on Monday at its annual developer conference, WWDC. Reports suggest that Apple is unveiling features big and small that will significantly impact your device experience — but only if you have one of the newest iPhone models.

Apple is expected to unveil many highly anticipated upgrades, such as a new and improved Siri, new summarization tools, a more customizable home screen, AI-powered photo editing, and more. However, according to Bloomberg, you'll need an iPhone 15 Pro — or a new model iPhone coming out this year — to use these features.

Also: What to expect from WWDC 2024: Siri, AI upgrades, iOS 18, MacOS 15, more

While requiring Apple's latest or upcoming hardware to experience these new features may seem like a money grab, the requirement is likely due to the processing hardware necessary to carry out the AI features, especially for tasks that require on-device processing.

On-device processing of AI tasks offers two key benefits: It keeps the information more secure and ensures less latency. However, not all iPhones, especially older models, have the processing power to handle those tasks, and according to the report, the new AI services will rely on both on-device and cloud-based processing, depending on the complexity of the task.

Specifically, these tasks require the A17 Pro chipset, which currently is found only in the iPhone 15 Pro and iPhone 15 Pro Max. Even the iPhone 15 and iPhone 15 Plus are not viable options as they run on the A16 Bionic.

The good news is that if you are a Mac or iPad user, you won't need the newest model. The Bloomberg report notes that to use the AI features on a Mac or iPad, you will need an M1 chip at least. With Apple currently up to M4-chip iPads and M3-chip Macs, users with older devices have some wiggle room.

Also: What is 'Apple Intelligence': How it works with on-device and cloud-based AI

Additionally, if you don't own the iPhone 15 Pro and don't plan on upgrading anytime soon, no worries; you will likely experience some of iOS 18's AI features, specifically those that run on the cloud. However, if you want the full iOS 18 experience, you may want to start preparing for an upgrade.

For the latest news from WWDC, including all announcements, analysis, and hands-on time with the latest technology, stay tuned to ZDNET.

This $100 pet tracker solved my biggest concerns as an anxious dog mom

Invoxia Minitailz on Dog Collar

ZDNET's key takeaways

  • Invoxia's Minitailz Health and GPS Tracker retails for $99 on Amazon or the company website. An additional subscription cost is required.
  • The tracker helps eliminate the guesswork about your dog's whereabouts by tracking its location, activity, and even some biometrics.
  • The subscription cost to use the tracker is more expensive than the actual hardware, making it a long-term investment.

A year ago, I adopted my first dog, a Yorkshire Terrier named Jimmy. Little did I know that with the cute face and floppy ears also came loads of parenting anxiety, made real by a $2,000 emergency vet bill. As a result, when I saw the Minitailz Health & GPS Tracker at CES 2024, I had to have it.

Also: 6 ways Apple can leapfrog OpenAI, Microsoft, and Google at WWDC 2024

Leveraging AI and other advanced tech, the gadget tracks your dog's location, activity, and biometrics, such as walking, playing, running, sleeping, and eating times and heart and respiratory resting rates. These capabilities earned it the "best innovation in the AI category" recognition at CES.

View at Amazon

In the box, you get Minitailz, a USB-C charger, and the ring to attach it to your dog's collar. To place it onto your dog's collar, you just slip it right through the opening. Then, just download the app, create an account, select a subscription of either $129.95 for one year or $229.95 for two years, and you are ready to go.

Also: Everything you need for a smart pet setup

To get the most out of your Minitailz, you need to let it gather as much information about your dog as possible, so it should be kept on your dog all day and night. To ensure that my experience with the gadget was as accurate and optimized as possible, I kept it on Jimmy's collar for over three months, and the information it gathered in that time was impressive.

As advertised, it presents all the health and activity insights in a way that is easy to read, access, and understand, as shown in the photos below. However, even though it's nice to have access to all this data, if I am being honest, I rarely look at it. My favorite feature of the app is the daily reports.

The reports present all the data collected in the past 24 hours in a fun, comprehensive way, adding necessary context where it can be helpful, such as comparing your dog's vitals to those of others in the Minitailz community. This is especially useful because, according to the company, a higher-than-normal resting respiratory rate can indicate impending heart failure, so having these insights into Jimmy's health gives me peace of mind.

As a crazy dog mom, this feature alone is worth the investment because it helps bridge the communication gap with my four-legged friend. It shows me things Jimmy can't express, such as whether he had enough playtime, walks, exercise, and a good night's sleep.

The second best feature is the activity notifications. When I write stories from the office, my partner works from home and cares for Jimmy. Instead of feeling like I am missing out, I get notifications on my phone that Jimmy went on a walk, got in a car, or is currently playing. This feature would be especially useful if you leave your pet at a doggy daycare or with a sitter to ensure everything is going according to schedule.

The only caveat is that sometimes it cannot accurately decipher whether Jimmy has the zoomies or is on an actual walk. This caused my partner to call me panicked once, thinking Jimmy had broken out of the apartment when, in reality, he was running around the living room playing with his toys.

Also: Exclusive interview with Raspberry Pi CEO: New $70 AI kit 'a watershed moment for us'

Another great feature for concerned dog parents is the GPS feature, which tracks the pet's location. Compared to the Apple AirTag, its location refreshes less frequently. However, the advantage is that it won't start beeping if you are away for too long. Another perk is that, unlike the AirTag, it is rechargeable, so you control how much battery it has instead of having it die randomly.

The Minitailz's fantastic battery life is also a highlight. It was advertised as lasting two whole weeks, and in my experience, that is accurate. My favorite part is that it notifies you via email and push notification when the tracker's battery is running low, making it impossible to miss. I also shared my account login with my partner, which doubles the number of people who have access to all of Jimmy's insights and needs.

ZDNET's buying advice

If you're a data aficionado and, as a result, a fan of wearables, this gadget is for you and your pet. With the Minitailz, you can access insights for your furry friend similar to those you get from a smartwatch or fitness tracker. Ultimately, this comprehensive health data helped me learn more about my pet's needs. However, given the steep subscription cost, I would skip it if you don't see the value in a pile of metrics that aren't necessarily actionable.

Featured reviews

Indian AI and Robotics Startup Swaayatt Robots Secures $4 Mn to Advance Level 5 Autonomy 

Bhopal-based Indian AI and Robotics startup Swaayatt Robots recently raised $4 million at a valuation of $151 million from US investors. The company will further raise $7 million at a valuation of $175 million.

“In 2021, we raised $3 million, of which we have utilised only $650,000. So in total, we now have $6.3 million to carry our research forward. We are already operating in cities, highways, and off-road locations,” said Sanjeev Sharma, chief of Swaayatt Robots, in an exclusive interview with AIM.

“By the end of this year, we’ll be creating a blueprint that could solve Level Four Autonomy globally. To scale that model, we may raise around $1.5 billion,” said Sharma.

In 2021, investors the world over poured a record $9.7 billion into autonomous vehicle development. However, last year that amount dropped by nearly 60% to just $4.1 billion. According to a McKinsey report, autonomous driving is the future, with self-driving vehicles projected to become a $300 billion to $400 billion revenue opportunity by 2035.

Sharma added that the company is planning a major demo in August. “If the August demo doesn’t excite Sam Altman and Elon Musk, along with the world, I don’t know what will,” he said. Earlier this year, the company executed a demo where it claimed to have achieved Level 5 Autonomy.

The autonomous driving scene in India is still in its nascent stages, but several startups like Minus Zero, Flowdrive, Flux Auto, and Netradyne are actively working to develop self-driving technologies designed for the country’s chaotic road conditions.

Recently, Swaayatt Robots conducted a demo where their autonomous vehicle avoided piles of soil and rocks dumped on the road for construction purposes.

Negotiating a construction site with well-structured traffic cones is typically considered a challenge by the autonomous driving industry in the West. “You will notice our vehicle made a 90-degree turn at an unmanaged intersection,” said Sharma, adding that it is a huge problem even for companies like Tesla and Waymo.

Last year in October, the company enabled autonomous vehicles to negotiate bidirectional traffic on single-lane roads. The demo was conducted at a relative speed of 44 kilometers per hour. “Now, that R&D is going to be scaled up to an extent where the relative velocity at the point of crossing can be maintained at 60 kilometers per hour,” said Sharma.

Focus on Autonomous Driving R&D

As of now, Swaayatt Robots hasn't sold its technology to anyone and has been conducting demos with a Mahindra Bolero. However, in the near future, the company will focus on OEM integration in existing vehicles, specifically targeting the military and autonomous trucking markets.

“For autonomous driving markets in the military domain, we cannot expect new vehicles to be operated in service; you would want to develop a system that can be retrofitted into these aftermarket vehicles,” said Sharma.

He added that aftermarket segments in autonomous trucking in North America are humongous. “If you look at the US trucking market, I mean, it’s an $800 billion addressable market,” said Sharma. He further said that Swaayatt Robots has been in talks with some of the fleet owners in the US.

Regarding India, he said, “We have understood the requirements of almost every single OEM in the country. We are in touch with most of the OEMs in India, except for Mahindra. It’s ironic because Anand Mahindra recently praised us.”

Even though the company is not in touch with Mahindra directly, Sharma explained that Mahindra’s vehicles are robust and the easiest to modify. “Mahindra has this Mobileye system, which is incorporated into the XUV700.”

Better Than Tesla?

Sharma likes to position Swaayatt as an R&D company, and unlike Tesla at this stage, it is not planning to sell cars anytime soon. “What we are doing is very sophisticated R&D in the general autonomous navigation domain to make autonomous driving happen,” he said.

“There is no agency on the planet, be it Musk’s Tesla or Altman’s OpenAI, that is working on bi-directional traffic,” he said, adding that in the October 2023 demo, their vehicle successfully negotiated bi-directional traffic on a single lane.

He added that by 2030, only four to five autonomous driving companies will survive, highlighting the immense R&D challenges involved. He emphasised that safety is a crucial area to address. Referring to GM’s Cruise accident, he pointed out that it resulted in a total halt of operations and several investigations.

Taking a dig at Comma.ai, he said, “If it were possible to download these packages from the internet and solve the problem of autonomous driving, companies like Tesla and Waymo wouldn’t survive.”

Talking about another Indian startup, RoshAI, he said the company isn’t doing anything new that Cruise, Waymo, and Zoox haven’t already done. “We are not getting funding in India because the country doesn’t invest in R&D,” he said.

Sharma is a huge admirer of Wayve AI. “Since 2019, with the advent of Wayve AI, people have been discussing autonomy without relying on maps. We were the first technology to enable vehicles without the reliance on high-definition maps. In 2017, we implemented multi-RL agents without requiring any maps,” he said.

“Our R&D is focusing on unsupervised learning, where we don’t record labeled data. We have reduced our R&D to the extent that there are only five algorithms now that require labeled data,” he added.

Until now, the company has been conducting demos exclusively with Mahindra Bolero. In the near future, the company plans to demonstrate with multiple vehicles, including Thar and Fortuner. “One demo we will be doing will have Thar and Bolero crossing each other in an autonomous fashion,” concluded Sharma.

The post Indian AI and Robotics Startup Swaayatt Robots Secures $4 Mn to Advance Level 5 Autonomy appeared first on AIM.

Apple Intelligence: How the iPhone’s on-device and cloud-based AI will work

iPhone with iOS 18

Apple is expected to have one of its most groundbreaking Worldwide Developers Conference (WWDC) keynotes on Monday, as the company plans to add major artificial intelligence (AI) features to its operating systems.

But rather than unveiling a slew of flashy generative AI features to knock your socks off, Apple is expected to focus on incorporating AI into its apps to simplify users' daily tasks. And it'll categorize such features under the name "Apple Intelligence."

Also: The best Apple deals of June 2024: iPhones, Apple Watches, iPads, and more

According to Bloomberg, Apple is branding its AI features under Apple Intelligence — and we didn't miss the snarky wordplay. Apple Intelligence will include the latest AI features coming to its operating systems, including iOS, iPadOS, MacOS, and WatchOS.

Apple Intelligence focuses on broad-appeal AI features rather than advanced image and video generation. To do this, the company developed in-house AI models and partnered with OpenAI to power a chatbot that will work similarly to ChatGPT.

Also: Apple to give Control Center its most useful customization feature ever with iOS 18

Some of the biggest AI features we're expecting with Apple Intelligence include:

  • Improved photo editing, like object removal, with AI in Photos.
  • Greater Siri control over apps and actions, including asking Siri to delete emails or edit photos.
  • AI generation of custom emojis based on text prompts.
  • Generating quick recaps of notes, text message threads, emails, and other text.
  • Automatically suggesting responses for emails and messages.
  • An improved Mail app that can categorize emails and generate messages.
  • Automatic transcription of voice memos.
  • AI enhancements for Xcode to auto-complete code.

Aside from these AI features, Bloomberg reports that iOS 18 will include new customizable icons and interface updates for Control Center, Settings, and Messages. Apple is also expected to launch a new Passwords app to replace the iCloud Keychain and give users a more user-friendly option, similar to 1Password and LastPass.

Also: Apple to unveil 'Passwords' manager app at WWDC 2024: What it is and how it works

On-device vs. cloud-based

Though Apple was rumored to be working on ways to keep its AI running strictly on-device for security and privacy, Apple Intelligence is expected to rely on the cloud for at least some tasks. The split will depend on task complexity, resource availability, data privacy considerations, and latency requirements.

Essentially, if a task is simple enough to be processed locally, leveraging the device's processing power and battery life, and requires immediate results, it is more likely to be handled on-device. Tasks involving sensitive data could also prioritize on-device processing, as Apple tries to prioritize data privacy.

Also: ChatGPT privacy tips: Two important ways to limit the data you share with OpenAI

In turn, cloud-based AI processing requires sending data from the device to remote servers that can handle complex or computationally heavy tasks. In Apple's case, tasks requiring processing large amounts of data or updated models could include advanced natural language processing (NLP), intricate analysis, and complex image and video generation.

Depending on its complexity and system requirements, an algorithm will determine whether a task requiring AI should be processed on-device or offloaded to the cloud. Simpler tasks like a Siri request and other basic NLP tasks can be processed on-device. More complex tasks, like generating a detailed summary of a large document, will be sent to the cloud, where more robust processing can occur.
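The routing decision described above can be sketched in a few lines of code. This is a hypothetical illustration of the reported criteria (task complexity, data sensitivity, latency needs), not Apple's actual algorithm; the function name, thresholds, and parameters are all assumptions for the sake of the example.

```python
# Hypothetical sketch of on-device vs. cloud routing for AI tasks.
# The thresholds and names below are illustrative assumptions only.

def route_task(task_complexity: int,
               handles_sensitive_data: bool,
               needs_low_latency: bool,
               device_budget: int = 5) -> str:
    """Decide where an AI task should run.

    task_complexity: rough cost estimate (1 = trivial, 10 = very heavy).
    device_budget: maximum complexity the local chip can handle.
    """
    # Sensitive or latency-critical tasks favor on-device processing,
    # but only if the local hardware can actually handle them.
    if (handles_sensitive_data or needs_low_latency) \
            and task_complexity <= device_budget:
        return "on-device"
    # Otherwise, anything within the local budget stays local;
    # heavier work is offloaded to the cloud.
    return "on-device" if task_complexity <= device_budget else "cloud"

# A basic Siri request stays local; summarizing a large document goes out.
print(route_task(2, False, True))   # on-device
print(route_task(9, False, False))  # cloud
```

In this sketch, privacy and latency only bias the decision; a task too heavy for the local chip still goes to the cloud, which matches the report that complex work will be offloaded regardless.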

Technical requirements for Apple Intelligence

According to Bloomberg, Apple's new AI features will be compatible with the latest Apple devices, including iPhone 15 Pro or newer models, which run on an A17 Pro chip, and iPads and Macs with an M1 chip or newer. While these AI features may help drive sales of new iPhones and Macs, as a current iPhone 14 Pro Max owner, I hope that at least some will trickle down to older iPhone models. We'll know what the official compatibility list is come WWDC on Monday.

Also: Apple Photos app is getting an AI-powered editing feature to wipe out photobombers

During WWDC, Apple is expected to highlight new security measures for running AI tasks, including chip-based security in data centers for cloud-based processing. It will also reiterate its commitment not to build user profiles based on consumer data.

Perhaps most importantly, users can opt in to Apple Intelligence features, which will be introduced as beta versions as Apple works to improve its AI capabilities over time.

Featured