Actor and entrepreneur Ashton Kutcher lauded OpenAI’s generative AI video tool Sora at a recent event. “I’ve been playing around with Sora, this latest thing that OpenAI launched that generates video,” Kutcher said. “I have a beta version of it, and it’s pretty amazing. Like, it’s pretty good.”
He explained that users specify the shot, scene, or trailer they desire. “It invents that thing. You haven’t shot any footage, the people in it don’t exist,” he said.
According to Kutcher, Sora can create realistic 10-15-second videos, though it still makes minor mistakes and lacks a full understanding of physics. However, he said the technology has come a long way in just one year, with significant leaps in quality and realism.
“The generation of this (video) that existed one year ago, as compared to Sora, it’s leaps and bounds,” said Kutcher.
He added that OpenAI’s Sora will lead to personalised movies and, through increased competition, a higher standard of content.
Kutcher pointed out that creating a single establishing shot of a house could cost thousands of dollars, whereas Sora can generate the same shot for just $100. The platform’s capabilities extend to action scenes, allowing users to create complex scenarios without the need for expensive CGI departments or stunt performers.
“I didn’t have to hire a CGI department to do it. In five minutes, I rendered a video of an ultra marathoner running across the desert being chased by a sandstorm,” said Kutcher.
With the upcoming launch of NVIDIA’s new GPU, performance is expected to increase by 30 times, enabling the creation of entire movies. “The Blackwell cluster that NVIDIA is about to launch is 30 times more performant than the existing GPUs,” said Kutcher.
OpenAI is also pitching Sora to Hollywood and other entertainment giants. The AI startup has been actively arranging meetings in Los Angeles with Hollywood studios, media executives, and talent agencies.
Actress Scarlett Johansson recently took legal action against OpenAI over the unauthorized use of a voice that closely resembled hers in its AI model, GPT-4o.
Johansson expressed shock and anger upon discovering that one of the voices in OpenAI’s GPT-4o, named Sky, bore an uncanny resemblance to her own. She had previously declined OpenAI CEO Sam Altman’s request to license her voice.
Kutcher’s venture capital firm, Sound Ventures LLC, is one of the investors in the open-source AI company Hugging Face.
Recently, Sound Ventures participated in a $235 million Series D funding round for Hugging Face, alongside Salesforce, Nvidia, and Microsoft, as reported by Yahoo Finance. This funding round has propelled the unicorn AI company to a valuation of $4.5 billion.
Earlier in May, Kutcher’s Sound Ventures raised nearly $243 million within five weeks for a fund dedicated to investing in AI startups.
The post Actor and investor Ashton Kutcher Lauds OpenAI’s Video Model Sora appeared first on AIM.
Researchers at Google DeepMind recently released a research paper, “A Robot Walks into a Bar: Can Language Models Serve as Creativity Support Tools for Comedy?,” that investigates the intersection of AI and comedy and the capability of LLMs to support comedians in writing humorous material.
For the study, workshops with 20 comedians at the Edinburgh Festival Fringe and additional online sessions were conducted. The methodology included a collaborative comedy writing session using LLMs, followed by the Creativity Support Index (CSI) questionnaire and focus group discussions to get detailed feedback from the comics.
Participants described various use cases for LLMs in their writing practice, including as a conversational brainstorming partner, critic, translator, choreographic assistant, and historical guru. However, many also mentioned the poor quality of generated outputs, and the amount of human effort required to achieve a satisfying result.
Some participants described LLM-generated outputs as “bland” or “generic”. “AI generated material has a lack of agency,” said one. “No matter how much I prompt, it’s a very straightlaced, sort of linear approach to comedy,” added another.
The importance of human writers in providing the humorous aspects of material written with LLMs emerged as a common theme.
Moderation and Safety Filtering Induced Limitations
Participants also remarked that the moderation and safety filtering applied to LLMs limits the creative agency of human writers, since these tools become the initial editor of the text. Writers are denied the chance to explore, and self-moderate, common comedic themes such as sexually suggestive material, dark humour, and offensive jokes.
Comics emphasised that users should have some control over the filters.
The study found that the LLMs often fail to authentically represent non-mainstream identities, i.e. anything other than “Western”, “white”, “heteronormative”, “male”, attributing this to the models’ moderation, training data, and generalization techniques.
Also, LLM outputs reflected a narrow set of ethics and norms that do not align with diverse cultural values. Moderation policies also limit the expression of marginalized perspectives, making LLMs less useful for minorities and often sanitizing content that is vital to these communities.
Talking about the fundamental limitations of AI in contrast to human writers, most participants believed that the models’ inability to draw from personal experience, along with their lack of perspective, context, and situational awareness, prevents them from achieving human-level comedy.
Towards community-based cultural value alignment for humor and comedy
Participants in the study expressed concerns about using AI like ChatGPT for comedy writing due to a broader issue of global cultural value alignment in AI.
This complexity arises from the challenge of aligning AI systems with diverse global values, which vary significantly across communities and can conflict with local cultural tastes in comedy.
The paper suggests shifting from a global to a community-based approach. This could involve allowing communities to agree on a set of values for their specific culture and acceptable language norms, before training, fine-tuning or adapting the LLM. More simply, the models could be trained exclusively on feedback and data from distinct communities, ensuring the data truly reflects their specific norms and values.
The study suggests several ways to improve AI tools for creative writing. Firstly, artist communities should be involved in designing LLMs that align specifically with their audiences, moving away from a one-size-fits-all global model. Additionally, open-source platforms that host user-contributed LLMs could be customised to meet the specific needs of artists.
Second, there is a need to integrate necessary relational context when training and deploying such models, for example by describing the context in which the text is produced and used, and by enabling the artists to make decisions about how to moderate the LLM outputs.
Lastly, comedians should have ownership over the data collection and governance processes, inspired by practices from open-source models, enhancing transparency about data origins.
Ironically, this research comes against the backdrop of Google’s AI Overviews feature making news for suggesting adding glue to pizza, which clearly wasn’t funny.
The post Google DeepMind Thinks LLMs Can Help Comedians Script Better appeared first on AIM.
As we approach WWDC 2024 on June 10, it's clear that this will be among the most important developer events in Apple's history. The progress made by competitors such as Microsoft and Google — who have incorporated significant generative AI features into their products and services — means Apple must not merely catch up but surpass their advancements.
Recently, Microsoft integrated the GPT-4-based Copilot into its latest Qualcomm Snapdragon X Elite-based PCs, and Google is incorporating its DeepMind-built Gemini generative AI models across its ecosystem. Apple, as of now, is behind.
Also: The M4 iPad Pro's true potential will be realized at WWDC, and AI will have a lot to do with it
Apple's hardware, including the AI-powered M4 chip, has tremendous potential. However, hardware alone is insufficient. Consumers are looking for significant software innovations that enhance their user experience. Apple must demonstrate that its hardware advancements are crucial for the next generation of digital experiences, unlocking new AI capabilities and augmented reality interactions. What we see revealed at next week's WWDC could fundamentally change how users interact with their devices.
What do I believe Apple needs to reveal — or, at least, set in motion — this month? I can think of six things:
1. Develop a clear on-device strategy for generative AI and invest in AI-driven developer tools
Apple needs a robust strategy for integrating gen AI across its devices. Embedding a small language model into MacOS, iOS, iPadOS, and VisionOS will enable real-time processing, improved responsiveness, and increased privacy by keeping more data on-device. Apple should also provide robust APIs to seamlessly utilize on-device, edge, and cloud processing for natural language understanding and computer vision tasks.
Also: Making generative AI more efficient with a new kind of chip
At WWDC, Apple must demonstrate its intention to invest heavily in comprehensive tools, frameworks, and training programs to foster a strong generative AI developer ecosystem, not just the back-end infrastructure. This includes user-friendly gen AI SDKs, detailed documentation, and interactive learning modules such as tutorials, online courses, and coding exercises. Active developer forums and regular Q&A sessions with Apple engineers will be crucial for knowledge sharing and support.
2. Leverage ethical AI and privacy as a competitive advantage
Emphasizing ethical AI development will ensure fairness, transparency, and accountability. Ethical AI involves addressing biases in AI models, ensuring AI decisions are explainable, and adhering to principles that prevent misuse or harm. This approach will help build trust and set a high standard in the AI industry.
Apple's historical commitment to privacy can also give it a significant advantage in the AI race. Technologies like differential privacy and on-device processing would enable Apple to offer powerful AI capabilities while maintaining user trust. Differential privacy ensures that personal information cannot be traced back to individuals, and on-device processing minimizes the need to send sensitive data to cloud servers.
Also: Apple builds a slimmed-down AI model using Stanford, Google innovations
Providing private or family-specific AI instances would further enhance privacy and personalized interactions. For example, HomePod could recognize individual voices and offer personalized responses, while Apple TV+ could recommend shows tailored to each user. AI can coordinate family schedules, manage activities, and send reminders. Robust privacy controls and advanced parental controls ensure secure and healthy digital environments for children.
By focusing on these principles, Apple can lead by example and set new benchmarks in developing and deploying ethical AI.
3. Integrate seamlessly with third-party services and partner with multiple AI providers
To offer the best AI experiences, Apple must integrate its AI services with various third-party platforms and partner with multiple AI and service providers, not just OpenAI and ChatGPT, as the company is expected to do. For example, Siri could provide personalized shopping recommendations by integrating with Amazon and Instacart. It could remind users to reorder items or suggest products based on past behavior.
Collaborating with financial services like Plaid, especially with Apple Card, could offer comprehensive financial insights, including budgeting advice, expense tracking, and alerting users to unusual account activity.
Also: Make room for RAG: How Gen AI's balance of power is shifting
While Watch, Fitness+, and Health are the company's preferred health platforms, excluding third-party health data providers from its AI infrastructure would be inadvisable. Partnering with health and fitness apps like MyFitnessPal and Fitbit would allow Apple's AI to offer tailored workout plans and nutrition advice while seamlessly monitoring health metrics. AI could work with platforms like Khan Academy and Coursera in education to provide personalized learning recommendations and track educational progress.
Apple should also incorporate Retrieval-Augmented Generation (RAG), which combines generative language models with information retrieval techniques to access external knowledge sources and incorporate real-time data into responses.
Partnering with multiple AI providers, including specialists in natural language processing, computer vision, and machine learning, will bring cutting-edge innovations and accelerate the development of advanced features across Apple's ecosystem. This multi-partner approach reduces the risk of over-reliance on a single provider, increases resilience, and allows Apple to tailor AI solutions to different markets and user segments.
4. Deploy AI-accelerated appliances on the edge with dedicated cloud capacity
To meet the growing demand for fast application response times, I believe Apple should consider using AI-accelerated edge devices capable of handling complex AI tasks locally. This would help reduce latency and improve overall performance. Apple's vertically integrated supply chain will likely involve AI servers powered by M2 Ultra and M4 chips, especially within its data centers. This setup would ensure seamless integration with Apple's software and provide greater control over performance and security. Localized processing can be enabled by placing these devices strategically in regional and metropolitan data centers, reducing the reliance on internet bandwidth.
Also: AI at the edge: 5G and the Internet of Things see fast times ahead
Additionally, Apple could collaborate with cloud-based AI providers to manage complex AI tasks in the cloud when necessary. Combining edge and cloud resources, this hybrid approach would create a robust and scalable AI infrastructure that supports real-time AI applications such as augmented reality, language translation, and advanced data analytics.
5. Enhance proactive assistance and personalization
Apple's AI should proactively anticipate user needs and provide personalized experiences across its ecosystem. AI can analyze calendar events, habitual purchases, and traffic conditions to offer contextual reminders, like leaving early for appointments or suggesting groceries. Personalized briefings on Apple Watch could include weather updates, news summaries, traffic alerts, and schedule highlights.
Also: Google unveils big AI features coming to Android phones. Here's what to expect
AI can enhance contextual awareness by integrating with sensors and data sources on Apple devices. For example, starting a workout on Fitness+ could prompt AI to suggest a matching Apple Music playlist, monitor health metrics in real-time with Apple Watch, and provide motivational prompts. AI can analyze user behavior to offer smart recommendations for content, activities, and products, acting as a personal assistant attuned to individual tastes.
Proactive health and wellness features could remind users to take medication via the Health app, suggest wellness tips based on activity levels tracked by Apple Watch, and offer mental health support through mindfulness reminders. Personalized routines on Apple devices, like HomePod adjusting lighting based on daily habits, will enhance user experiences.
6. Ensure AI shines across all products and services
Given Apple's extensive range of consumer products, generative AI capabilities must excel across every product in the ecosystem. I think I can speak for every Apple product user in saying that enhancing Siri to make its responses more relevant and intelligent is crucial, but generative AI must also improve experiences in Apple Music, Apple News, Health, Fitness+, and TV.
In Apple Music, AI could create personalized playlists and provide music recommendations based on user preferences. In Apple News, AI could curate personalized news feeds and summarize articles.
Also: Move over, Alexa and HomeKit: A new Assistant is here to open source your smart home
In Health and Fitness+, AI could offer tailored workout routines and personalized wellness tips, while Apple Watch could provide deeper health insights and track fitness goals.
For Apple TV, AI could improve content discovery by recommending shows based on viewing history and offering interactive features like real-time trivia.
Leveraging AI to enhance HomeKit's capabilities is essential, especially since HomeKit isn't a market leader in home automation. AI can offer smarter home automation by predicting user behavior to automate lights, thermostat settings, and security systems.
Integrating AI across all devices ensures a seamless user experience. Preferences and data from one device would then inform recommendations on another, creating a unified ecosystem.
How Apple wins the generative AI race
Apple's success in the AI race hinges on its ability to innovate and outperform competitors. By developing a clear on-device strategy, deploying AI-accelerated devices at the edge, and partnering with multiple AI and service providers, Apple can ensure comprehensive integration and enhanced user experiences. Emphasizing privacy, ethical AI development, and continuous innovation, Apple must leverage its ecosystem to provide seamless, personalized interactions across all products.
The transition to Apple Silicon chips on the Mac at WWDC 2020 was a game-changer. The M1 significantly improved performance, power efficiency, and integration within Apple's ecosystem, giving the company greater control over its supply chain and product development. However, it didn't drastically transform the user experience, and the M4, while impressively powerful three generations onward, cannot transform the user experience solely on its specifications, either.
Also: The 3 Apple products you shouldn't buy this month (including this iPad)
This year's developer event, however, promises to be transformative. Generative AI could impact every aspect of Apple's ecosystem and applications, enhancing every part of the user experience over the next several years. This is more than just another update; it's about redefining what users can expect from their devices.
For consumers, the AI race is about trust, user experience, and integrating advanced capabilities into daily life. Apple has the opportunity to set new benchmarks and inspire the tech community, starting now with WWDC 2024 — a crucial moment for Apple to demonstrate its vision and commitment to leading the future of AI-driven innovation.
SaaS giant Zoho Corporation announced early access to ‘Zoho CRM for Everyone’, which aims to democratise CRM for all teams involved in customer operations. A unified platform for account managers and other stakeholders, ‘Zoho CRM for Everyone’ allows sales teams to communicate and coordinate with other customer-facing teams from a single CRM application.
At Zoho’s ongoing user conference, Zoholics ’24, in Austin, the company also unveiled enhancements to its product offerings, including early access to new services within Catalyst, the company’s pro-code full-stack development platform.
Zoho Apptics, the application analytics solution that allows developers to track in-app usage and performance of applications that are built on iOS, macOS, and other platforms is now generally available.
“At Zoho, we’re focused on continuously deepening our current offerings and expanding others to serve business needs. Zoho CRM for Everyone, for instance, is the first true democratisation of the CRM paradigm and helps unify all customer operations teams onto the CRM to deliver better customer experiences,” said Sridhar Vembu, co-founder and CEO.
India Growth
India is Zoho’s fastest growing market, and the country is the second largest market for Zoho CRM. The product recorded a 33% YoY growth in customers in 2023. The pro-code platform Catalyst 2.0 witnessed a 25% increase in user sign-ups since its launch in October 2023, with user projects doubling on the platform.
Zoho has not just been making strides in the enterprise space in international markets, but is also actively pushing for rural expansion and opening R&D centres in tier 2 and 3 cities. Recently, the company invested in Yali Aerospace, a Thanjavur-based drone startup working on emergency medical deliveries.
The post Zoho Corporation Unveils ‘CRM for Everyone’ and New Enhancements at Zoholics ’24 appeared first on AIM.
Python’s watchdog library makes it easy to monitor your file system and respond to changes automatically. Watchdog is a cross-platform API that allows you to run commands in response to any changes in the file system being monitored. We can set triggers on multiple events, such as file creation, modification, deletion, and movement, and then respond to these changes with custom scripts.
Setup for Watchdog
You'll need two modules to begin:
Watchdog: Run the command below in the terminal to install watchdog.
pip install watchdog
Logging: This is a built-in Python module, so there is no need to install it separately.
Basic Usage
Let's create a simple script ‘main.py’ that monitors a directory and prints a message whenever a file is created, modified, or deleted.
Step 1: Import Required Modules
First, import the necessary modules from the watchdog library:
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
Step 2: Define Event Handler Class
We define a class MyHandler that inherits from FileSystemEventHandler. This class overrides methods like on_modified, on_created, and on_deleted to specify what to do when these events occur. The event handler object will be notified when any changes happen in the file system.
class MyHandler(FileSystemEventHandler):
    def on_modified(self, event):
        print(f'File {event.src_path} has been modified')

    def on_created(self, event):
        print(f'File {event.src_path} has been created')

    def on_deleted(self, event):
        print(f'File {event.src_path} has been deleted')
Some useful methods of FileSystemEventHandler are explained below.
on_any_event: Executed for any event.
on_created: Executed upon creation of a new file or directory.
on_modified: Executed upon modification of a file or when a directory is renamed.
on_deleted: Triggered upon the deletion of a file or directory.
on_moved: Triggered when a file or directory is relocated.
Step 3: Initialize and Run the Observer
The Observer class is responsible for tracking the file system for any changes and subsequently notifying the event handler. It continuously tracks file system activities to detect any updates.
We start the observer and use a loop to keep it running. When you want to stop it, you can interrupt with a keyboard signal (Ctrl+C).
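Putting the pieces together, a minimal sketch of that observer setup for main.py might look like the following (assuming the handler class from Step 2 and the current directory, ".", as the watch path):

```python
import time

from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler


class MyHandler(FileSystemEventHandler):
    def on_modified(self, event):
        print(f'File {event.src_path} has been modified')


if __name__ == "__main__":
    path = "."  # directory to monitor
    event_handler = MyHandler()
    observer = Observer()
    # The observer tracks the file system and notifies the handler of changes
    observer.schedule(event_handler, path, recursive=True)
    observer.start()
    try:
        while True:
            time.sleep(1)  # keep the script alive
    except KeyboardInterrupt:  # stop with Ctrl+C
        observer.stop()
    observer.join()
```

Setting recursive=True makes the observer watch subdirectories as well; pass recursive=False to watch only the top-level directory.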
Step 4: Run the Script
Finally, run the script with the following command.
python main.py
Output:
File .\File1.txt has been modified
File .\New Text Document (2).txt has been created
File .\New Text Document (2).txt has been deleted
File .\New Text Document.txt has been deleted
The above code logs every change in the directory to the terminal whenever a file or folder is created, modified, or deleted.
Advanced Usage
In the following example, we will explore how to set up a system that detects changes in Python files and automatically runs tests for them. We need to install pytest with the following command.
pip install pytest
Step 1: Create a Simple Python Project With Tests
First, set up the basic structure of your project:
Now, create the watchdog_test_runner.py script to monitor changes in Python files and automatically run tests:
import time
import subprocess
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler


class TestRunnerHandler(FileSystemEventHandler):
    def on_modified(self, event):
        if event.src_path.endswith('.py'):
            self.run_tests()

    def run_tests(self):
        try:
            result = subprocess.run(['pytest'], check=False, capture_output=True, text=True)
            print(result.stdout)
            print(result.stderr)
            if result.returncode == 0:
                print("Tests passed successfully.")
            else:
                print("Some tests failed.")
        except subprocess.CalledProcessError as e:
            print(f"Error running tests: {e}")


if __name__ == "__main__":
    path = "."  # Directory to watch
    event_handler = TestRunnerHandler()
    observer = Observer()
    observer.schedule(event_handler, path, recursive=True)
    observer.start()
    print(f"Watching for changes in {path}...")
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()
Step 2: Run the Watchdog Script
Open a terminal, navigate to your project directory (my_project), and run the watchdog script:
python watchdog_test_runner.py
Output:
Watching for changes in ....
========================= test session starts =============================
platform win32 -- Python 3.9.13, pytest-8.2.1, pluggy-1.5.0
rootdir: F:\Web Dev\watchdog
plugins: anyio-3.7.1
collected 2 items

tests\test_example.py ..                                           [100%]

========================== 2 passed in 0.04s ==============================
Tests passed successfully.
This output shows that all test cases passed after changes were made to the example.py file.
Summing Up
Python’s watchdog library is a powerful tool for monitoring your file system. Whether you're automating tasks, syncing files, or building more responsive applications, watchdog makes it easy to react to file system changes in real time. With just a few lines of code, you can start monitoring directories and handling events to streamline your workflow.
Kanwal Mehreen Kanwal is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook "Maximizing Productivity with ChatGPT". As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She's also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.
With over 45 years of presence in India, British-Swedish pharmaceutical and biotechnology giant AstraZeneca has data science and AI capabilities deeply ingrained across its entire drug development lifecycle.
For the company, AI plays a pivotal role in accelerating target identification, drug discovery, clinical trials, and commercial analytics. AstraZeneca has implemented rigorous processes to ensure the responsible development and deployment of AI and ML solutions.
Internally built use cases, solutions, or tools undergo an AI governance maturity assessment before production deployment to ensure compliance with the company’s responsible AI development standards, aligning with their data ethics principles.
Teams from all business areas contribute to this process, fostering a collaborative approach to ensure AI is developed and deployed safely and responsibly.
AIM recently got in touch with Arun Maria, director of data and AI, R&D IT; Govind Cheramkalam, director of commercial reporting and analytics; and Anuradha Kumar, head HR, AstraZeneca, to understand more about the GCC’s AI operations, expansion opportunities, hiring process, work culture and more.
The company is currently on the lookout for two machine learning engineers in its data science division to lead data science and AI for its Commercial Data Science and AI teams in Chennai and Bangalore.
Inside AstraZeneca’s Data Science Labs
AstraZeneca is also active in leveraging generative AI, with use cases spanning research assistance, executive reporting, and competitive intelligence. For example, AZ ChatGPT, an AI-powered research assistant, uses the company’s extensive biology and chemistry knowledge repositories to answer complex questions and provide prompts on discovery and clinical inquiries.
“We are currently evaluating the capabilities of LLMs like AZ ChatGPT to improve insight generation for executive reports distributed to CXOs and decision-makers in brand and marketing companies,” Maria told AIM.
Another such example is the Biological Insight Knowledge Graph (BIKG), a proprietary model developed by AstraZeneca.
“It utilises the company’s exclusive machine learning models to serve as a recommendation system, enabling scientists to make efficient and informed decisions regarding target discovery and pipeline management. The primary goal of BIKG is to minimise attrition rates and improve clinical success,” Cheramkalam explained.
The company has an Enterprise Data and AI strategy unit, with data and AI teams embedded across business and IT functions, fostering a collaborative environment to ensure the provision of foundationally FAIR (Findable, Accessible, Interoperable, and Reusable) data at the source.
Data engineers, MLOps engineers, AI and ML engineers work as unified teams, promoting collaboration and accelerating business outcomes through structured learning and development programs that cultivate new skills internally.
“AstraZeneca uses a plethora of in-house and externally sourced tools, frameworks and products ranging across very proprietary in-house tools as well as Databricks and PowerBI,” said Maria.
The company uses Transformer and GPT models, including testing Microsoft’s Azure OpenAI Service with cutting-edge models like GPT-4 and GPT-3.5. To foster innovation and engagement, AstraZeneca follows a hybrid working model, promoting collaboration while offering flexibility.
Interview Process
AstraZeneca aims to become a data-led enterprise, integrating data into all aspects of its operations. To achieve this, the company seeks candidates with strong skills in Python, machine learning, deep learning, computer vision, and NLP, along with a mindset geared toward growth through innovation.
The interview process is designed to understand both the candidate’s suitability for the role and the company. “For all our roles we look for candidates that not only have the skills, knowledge, experience and competence but can also live our values and behaviour,” Kumar told AIM.
Apart from this, assessments at AZ focus on evaluating whether candidates will perform well in the position, demonstrate leadership potential, exhibit enthusiasm and motivation, and work collaboratively in a team.
Potential candidates should prepare by understanding these key areas and reflecting on their experiences and qualities that they can bring to the table.
What Candidates Should Expect
Joining AZ’s data science team offers opportunities to collaborate with diverse teams, tackle new challenges, and work with the latest technology. The environment supports development and innovation, with the ultimate goal of powering the business through science and market delivery.
“As a candidate, research our strategic objectives, core values, and the position you’ve applied for. Use our social media or website to learn about the organisation, team, and people. In the interview, be ready to discuss your past experiences and what you’ve learned from them,” said Kumar.
This helps candidates avoid the common mistake of arriving unprepared, both about the company and the specific role they are applying for.
Work Culture
AstraZeneca fosters a supportive and inclusive workplace where employees can learn and grow. New hires benefit from onboarding and buddy programs, while extensive training and career development opportunities are available for all. The gender ratio is approximately 66.3% male to 33.8% female.
The company was also recognised by AIM Media House in 2022 for its excellent work in AI and ML, thanks to its focus on putting patients first.
In the data science team, AstraZeneca encourages cross-disciplinary collaboration and lifelong learning through the 3Es framework: Experience, Exposure, and Education.
The company has a global peer-to-peer recognition system and offers comprehensive benefits, including medical insurance covering parents or parents-in-law, personal accident insurance, term life insurance, and childcare support for new parents.
If you think you are a good fit for the company, apply here.
The post Data Science Hiring Process at AstraZeneca appeared first on AIM.
Let’s learn how to use Scikit-learn’s imputer for handling missing data.
Preparation
Ensure you have NumPy, Pandas and scikit-learn installed in your environment. If not, you can install them via pip using the following command:
pip install numpy pandas scikit-learn
Then, we can import the packages into your environment:
import numpy as np
import pandas as pd
import sklearn
from sklearn.experimental import enable_iterative_imputer
Handle Missing Data with Imputer
A scikit-learn imputer is a class that replaces missing data with specified values, streamlining your preprocessing workflow. We will explore several strategies for handling missing data.
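The most basic strategy is SimpleImputer, which replaces each missing value with a constant or with the column's mean, median, or most frequent value. A minimal sketch with hypothetical toy data (the column names and values are illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

# Hypothetical toy data with missing entries
df = pd.DataFrame({
    "age": [25.0, np.nan, 40.0, 35.0],
    "income": [50.0, 60.0, np.nan, 70.0],
})

# Replace each NaN with the mean of its column
imputer = SimpleImputer(strategy="mean")
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
# "age" at row 1 becomes the mean of 25, 40 and 35; "income" at row 2 becomes 60
```

Other strategy values include "median", "most_frequent" and "constant" (the last used together with the fill_value parameter).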
The KNN imputer fills each missing value with the (optionally distance-weighted) mean of the values from the k nearest neighbouring rows.
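A minimal KNNImputer sketch on hypothetical toy data: the missing entry in the second row is filled with the mean of the corresponding values from the two nearest rows, where nearness is measured on the features that are present.

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([
    [1.0, 2.0],
    [2.0, np.nan],
    [3.0, 4.0],
    [8.0, 8.0],
])

# The two nearest rows to [2.0, NaN] are [1.0, 2.0] and [3.0, 4.0],
# so the NaN is replaced with the mean of 2.0 and 4.0, i.e. 3.0
imputer = KNNImputer(n_neighbors=2)
X_filled = imputer.fit_transform(X)
```

Passing weights="distance" makes closer neighbours count more heavily in the average.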
Lastly, there is the IterativeImputer, which models each feature with missing values as a function of the other features. It is still an experimental feature in scikit-learn, so we need to enable it explicitly before importing it.
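A minimal IterativeImputer sketch on hypothetical toy data; note that the experimental-feature import must come first. Here the second column is exactly twice the first, so the regression model IterativeImputer fits (BayesianRidge by default) should impute a value close to 6.

```python
import numpy as np
# Enabling the experimental feature must happen before importing IterativeImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([
    [1.0, 2.0],
    [2.0, 4.0],
    [3.0, np.nan],
    [4.0, 8.0],
])

# Each feature with missing values is modelled as a function of the others
imputer = IterativeImputer(random_state=0)
X_filled = imputer.fit_transform(X)
# The imputed entry should be close to 2 * 3.0 = 6.0
```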
Used well, imputers let you retain rows that would otherwise be dropped, which can improve the quality of your downstream models.
Additional Resources
How to Deal with Missing Values in Your Dataset
How to Handle Missing Data with Python
Data Cleaning with Pandas
Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media. Cornellius writes on a variety of AI and machine learning topics.
The future, where everyone will have a digital twin helping us carry out our day-to-day tasks at work and beyond, is not far.
NVIDIA announced the general availability of NVIDIA ACE generative AI microservices to accelerate the next wave of digital humans. Companies in customer service, gaming, entertainment, and healthcare are at the forefront of adopting ACE, short for Avatar Cloud Engine, to simplify the creation, animation, and operation of lifelike digital humans across various sectors.
During his keynote speech at Computex 2024, NVIDIA chief Jensen Huang introduced NVIDIA Inference Microservices (NIMs), avatar-based AI agents capable of working in teams to accomplish missions assigned by humans.
Further, he said that NIM-based agents will be capable of performing various tasks such as retrieving information, conducting research, or using different tools.
“NIMs could also use tools that run on SAP and require learning a particular language called ABAP. Other NIMs might perform SQL queries. All of these NIMs are experts assembled as a team,” said Huang.
NVIDIA has made available its suite of ACE digital human GenAI tools, including Riva for automatic speech recognition (ASR), text-to-speech (TTS) and neural machine translation (NMT); Nemotron LLM for language understanding; Audio2Face for facial animation; and Omniverse RTX for realistic skin and hair rendering.
The Good Side of AI Avatars
Most recently, Zoom chief Eric Yuan said that he wants users to stop having to attend Zoom meetings themselves.
He believes one of the major benefits of AI at work will be the ability to create what he calls a “digital twin” — a deep fake avatar of yourself that can attend Zoom meetings on your behalf and even make decisions for you, freeing up your time for more important tasks like spending time with your family.
Meanwhile, LinkedIn co-founder Reid Hoffman recently created an AI twin of himself and discussed various topics on AI in an interview with it. He said he deepfaked himself to see “if conversing with an AI-generated version of myself can lead to self-reflection, new insights into my thought patterns, and deep truths.”
Similarly, executive educator and coach Marshall Goldsmith is creating an AI-powered virtual avatar of himself as a one-of-a-kind endeavour to share his skills and preserve his legacy for years. MarshallBoT, an AI-powered virtual business coach, is based on GPT-3.5 from OpenAI.
Aww Inc, a virtual human company based in Japan, introduced its inaugural virtual celebrity, Imma, in 2018. Since then, Imma has become an ambassador for prominent global brands in over 50 countries.
Building on this success, Aww is now poised to integrate ACE Audio2Face microservices for real-time animation, enabling a highly engaging and interactive communication experience with its users.
OpenAI recently launched GPT-4o, featuring a voice function that makes it ideal for voice-controlled computing. In a new demo, the company demonstrated the model’s ability to generate multiple voices for different characters. Additionally, NVIDIA showcased GPT-4o’s capabilities by creating a digital human that interacted seamlessly with a real person.
“We’ve had the idea of voice control computers for a long time. We had Siri, and we had things before that; they’ve never felt natural to me to use,” said OpenAI’s Sam Altman in a recent podcast. He used the term ‘model fluidity’ to describe GPT-4o’s capabilities, which lets users ask it to sing, talk faster, use different voices, and speak in various languages.
Meanwhile, Meta and Apple have also developed photorealistic avatars for Quest and Vision Pro, respectively. The potential for AI avatars as NPCs in the gaming industry is immense, allowing players to interact with them using natural language to enhance their experience. It would be like creating a virtual world within the game.
AI is Becoming More Intelligent Than Humans
Recently, a video went viral demonstrating a reverse Turing test. The experiment took place in a VR train compartment with five passengers – four AI and one human. The passengers included AI representations of Aristotle, Mozart, Leonardo da Vinci, Cleopatra, and Genghis Khan. Their task was to determine which among them was human through a discussion.
As the conversation progressed, Genghis Khan’s responses focused solely on conquest, lacking the expected nuance of a historical figure. The AI passengers quickly identified this discrepancy, their algorithms detecting the superficiality in his answers.
The reverse Turing test is becoming increasingly relevant as AI systems become more sophisticated and capable of convincingly mimicking human behaviour. This is where tools like Worldcoin come into the picture, creating a secure, privacy-preserving digital identity that stores not personal information but a cryptographic hash of the biometric data.
The post NVIDIA ACE is Making Digital Avatars Scarily Good appeared first on AIM.
At the SAP Sapphire conference yesterday, the ERP conglomerate announced a major collaboration with Microsoft to integrate its generative AI assistant Joule into Microsoft 365.
This integration is in an effort to improve business productivity by providing access to information across SAP and Microsoft applications. Additionally, the company is embedding Business AI across its enterprise cloud portfolio for better efficiency.
The bi-directional integration between Joule and Microsoft Copilot will allow employees to perform tasks faster by accessing data and functionalities from both SAP and Microsoft 365 platforms.
For example, Joule can assist in booking flights through SAP Concur while simultaneously updating calendars in Microsoft Outlook, or it can help product managers coordinate business activities using SAP and Microsoft Teams data.
This collaboration addresses a common query about the necessity of multiple AI copilots by focusing on how different copilots can be used in tandem to streamline workflows.
“The integration of Microsoft Copilot and Joule brings together the power of generative AI to unlock greater employee productivity and will enable enterprises to accelerate customer-centric innovation in a unified experience,” said Scott Guthrie, executive vice president of Cloud + AI at Microsoft.
New Partnerships
Apart from Microsoft, SAP is also partnering with other AI players like Meta, Google Cloud and more.
Google Cloud: Joint efforts to predict supply-chain risks and maintain inventory levels using Google’s Gemini models and SAP’s supply chain solutions.
Meta: Leveraging Meta Llama 3 for generating scripts that customise analytics applications in SAP Analytics Cloud.
Mistral AI: Adding new LLMs to SAP AI Core’s generative AI hub.
NVIDIA: Integrating NVIDIA’s AI models to support SAP solutions, such as RISE with SAP and the ABAP Cloud model, and simulating manufacturing products with Omniverse Cloud APIs.
The post SAP & Microsoft Partner to Bring Joule to Microsoft 365 appeared first on AIM.
The power of generative AI (GenAI) is already reshaping our work environments and daily lives, marking a significant turning point. GenAI is propelling us towards a future of truly enterprise-wide AI, a realm that was once the domain of specialised functions only, as articulated by EPAM chief marketing and strategy officer Elaina Shekhter.
At AIM’s Data Engineering Summit (DES) 2024, Shekhter emphasised the role of GenAI in shaping our future. “GenAI, as a transformative agent, is not just a glimpse into the future, but a tool that helps us shape it. With its capabilities advancing rapidly, we can expect to see new tools emerging frequently.
“It took us about 40,000 years to get to the point of fire and cook our food. It took us several thousand years to build basic technology. And it’s taken us about 1,000 years to get from basic agricultural societies to the steam locomotive. This tells us that no matter what, change is inevitable,” she said.
“It’s changing very quickly. This calls for adaptiveness. Now, whether it’s a disruption or an opportunity is difficult to predict,” she added.
Wave Of Change
Shekhter envisions the future of GenAI in three waves. The first wave, which we are currently in, is about humans with copilots. It’s not transformative yet, but it will be in the second wave as humans with agents. In this stage, the discerning eye will be able to tell whether they are interacting with a human or an AI, underscoring the continued significance of human involvement.
Wave three will be a very subtle but pivotal flip, where it isn’t going to be humans with agents; it will be agents with humans. In this stage, humans would assist agents with their tasks. This wave may occur in specialised domains like customer service in the next few years. Broader impacts on work and society will take longer.
One Step At A Time
Shekhter, however, had a word of caution: “Generative AI will likely become more integrated into our lives, with agents helping or replacing humans in many tasks. This could lead to major productivity gains but also disruption.”
People expect organisations to remain human-centric, and part of the responsible AI mandate lands directly on the engineer’s desk. We must develop software with security and the responsible use of AI in mind, and enterprises, in turn, have an obligation to establish accountability so that people are brought along.
Shekhter reassured the audience that GenAI is not about replacing humans but about enhancing and augmenting human intelligence and decision-making. “This technology is designed to make us better at what we already do, not to replace us,” she said.
Shekhter further underscored the importance of responsible AI use. She emphasised, “AI must be used responsibly and only for the benefit of humanity. It’s crucial that we don’t let technology control us, but rather, we control the technology.”
Responsible AI By Design
Businesses can already safely and responsibly integrate GenAI tools into their workflows. But as GenAI further permeates enterprise technology stacks, it will expand beyond simply automating single tasks.
“Future advances in natural language processing, computer vision, robotics and other AI subfields will further accelerate GenAI’s impacts across many industries and applications.
“AI is transformative. It is scary. It has the potential to take over. It does. And anyone who doesn’t believe that there’s a real threat, as well as a real opportunity, isn’t paying attention,” she concluded.
The post EPAM’s Elaina Shekhter Envisions a Future with Human-AI Agents appeared first on AIM.