Legendary computer scientist Gordon Bell passed away last Friday at his home in Coronado, Calif. He was 89. The New York Times has a nice tribute piece. A long-time pioneer at Digital Equipment Corp, he pushed hard for the development of lower-cost, high-performing alternatives to the IBM mainframe. In recent years, he was most familiar to the HPC community through the prestigious ACM Gordon Bell Prize, named in his honor and presented each year at the SC conference.
Here’s an excerpt from the NYT article:
Gordon Bell
“C. Gordon Bell, a technology visionary whose computer designs for Digital Equipment Corporation fueled the emergence of the minicomputer industry in the 1960s, died on Friday at his home in Coronado, Calif. He was 89.
“The cause was pneumonia, his family said in a statement.
“Called the ‘Frank Lloyd Wright of computers’ by Datamation magazine, Mr. Bell was the master architect in the effort to create smaller, affordable, interactive computers that could be clustered into a network. A virtuoso at computer architecture, he built the first time-sharing computer and championed efforts to build the Ethernet. He was among a handful of influential engineers whose designs formed the vital bridge between the room-size models of the mainframe era and the advent of the personal computer.
“After stints at several other startup ventures, Mr. Bell became the head of the National Science Foundation’s computers and information science and engineering group, where he directed the effort to link the world’s supercomputers into a high-speed network that led directly to the development of the modern internet. He later joined Microsoft’s nascent research lab, where he remained for about 20 years before being named researcher emeritus.”
HPCwire interviewed Bell back in February 2023 on the creation of the special ACM Gordon Bell Prize for Climate Modeling, and he made his view on the importance of acknowledging and combating climate change very clear:
“One of the big things that’s been going on is just not admitting that climate is manmade,” Gordon Bell told HPCwire. “Maybe one of the great contributions to computing has been being able to model well enough to convince people that it’s manmade.”
“The prize this time… I woke up angry about – I shouldn’t say angry about, but just disappointed [that] there’s not a really good ledger about what’s going on here,” Bell said. “We’re not at any level – I don’t think we’re at a good national level yet as to: what are our commitments, and how are we doing vis-à-vis the commitments? To me, it’s so vague at this point. That’s one aspect. The prize I hope will help a little bit in that. … In order to get change going on, you really have to assign what you’re doing, and I just am so disappointed. It’s such a massive problem.”
The interview, conducted by Oliver Peckham, was recorded and is worth a listen.
The media, including movies, have made AI conversations confusing and overly emotional. I propose a simple exercise for middle and high school students to provide hands-on training in understanding how an AI model works and the vital role of the AI Utility Function. The exercise will focus on something we do nearly every day: deciding where to dine. We will follow a four-step process to define, create, and execute our “Dining Recommendation” AI Utility Function.
In Part 1 of this two-part blog series, we covered the first two steps:
In the first step, students were introduced to the foundational concept of defining relevant variables and metrics, such as affordability, food quality, and ambiance. This stage is essential for setting up the criteria the AI will use later to evaluate options. Students learn to think critically about the factors that most influence their decisions, mirroring how AI systems must be programmed to consider various elements in their algorithms.
In Step 2, we create entity models for each restaurant. Each establishment is scored on a scale of 1 to 100 based on the variables defined in Step 1. This quantification process demonstrates to students how raw data and subjective assessments are standardized for fair comparisons across different options. It clearly illustrates how AI systems process and evaluate diverse inputs.
In Part 2, we will finish the exercise by covering steps 3 and 4. This hands-on exercise aims to clarify AI for students and emphasize the significance of defining AI Utility Functions to guarantee ethical and responsible AI results. By grasping these processes, students can understand how AI models are created to mirror human values and societal norms.
Step 3: Personalize Variable Weights
In step 3, students will explore the significance of variable weighting by assigning weights to each of the ten variables, reflecting their intent and dining preferences. For instance, if students seek a quick, inexpensive meal, they may prioritize affordability, convenience, and value for money. Conversely, the emphasis might shift to ambiance, social media reviews, and menu quality for a date night. Similarly, if food quality is prioritized over cost, a higher weight could be assigned to the food quality variable (Figure 1).
Figure 1: Personalize Variable Weights Based On Your Intent
In this activity, students should be encouraged to experiment with the weights of each of the ten variables to see how that changes the AI model’s recommendations. This will help them understand the concept of weighted decision-making, where all factors are not equally important, and the outcome should reflect their priorities.
Step 4: Calculate AI Model Restaurant Recommendations
In the final step of our AI Utility Function Exercise, we will simulate the decision-making process typical of AI systems by calculating a composite score for each restaurant. Based on the personalized variable weights associated with the student’s dining intent, we can now calculate each restaurant’s relative score, ranking the options against the student’s individualized criteria.
This calculation involves a straightforward mathematical operation: the scores from Step 2—each restaurant’s performance on the ten variables—are multiplied by the individual weights assigned by the students in Step 3, reflecting their dining preferences. This step demonstrates how AI leverages user preferences in its prioritization process and how students’ variable weights shape AI’s decisions and recommendations. The results of these multiplications are then summed to produce a total score for each restaurant (Figure 2).
Figure 2: Calculate AI Model Restaurant Recommendations
By performing these calculations within a spreadsheet, students can visually track how each variable’s weight influences the AI model’s recommendation, clearly seeing which restaurant best aligns with their personalized criteria. This approach demystifies the computational aspect of AI utility functions and reinforces the importance of weighting in achieving personalized outcomes.
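For readers who prefer code to a spreadsheet, the Step 3 and Step 4 calculations can be sketched in a few lines of Python. The restaurant names, the three illustrative variables (the exercise itself uses ten), and every score and weight below are hypothetical placeholders, not values from the exercise:

```python
# Step 2: entity models -- each restaurant scored 1-100 per variable.
# All names and numbers here are made up for illustration.
scores = {
    "Taco Spot":  {"affordability": 90, "food_quality": 70, "ambiance": 40},
    "Bistro Uno": {"affordability": 50, "food_quality": 85, "ambiance": 90},
}

# Step 3: personalized weights reflecting dining intent (each set sums to 1.0).
quick_meal = {"affordability": 0.6, "food_quality": 0.3, "ambiance": 0.1}
date_night = {"affordability": 0.1, "food_quality": 0.4, "ambiance": 0.5}

def composite(restaurant_scores, weights):
    """Step 4: multiply each score by its weight, then sum the products."""
    return sum(restaurant_scores[var] * w for var, w in weights.items())

def recommend(weights):
    """Rank restaurants by composite score, best first."""
    return sorted(scores, key=lambda r: composite(scores[r], weights), reverse=True)

print(recommend(quick_meal))  # ['Taco Spot', 'Bistro Uno']
print(recommend(date_night))  # ['Bistro Uno', 'Taco Spot']
```

Swapping the `quick_meal` weights for the `date_night` weights flips the ranking, which is exactly the lesson of Step 3: the same entity models produce different recommendations under different intents.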
Note: Moving from handling a few variables and a few restaurants to analyzing tens of thousands of variables across thousands of restaurants dramatically increases computational demands. Machine learning algorithms become essential for efficiently handling these vast datasets and complex variable interactions. Reinforcement Learning (RL) algorithms enable continuous learning and recommendation adjustments to optimize our dining experience in dynamic environments. For example, if unexpected traffic affects accessibility to our highest-prioritized restaurant, the RL-powered AI Utility Function can recalculate and prioritize the next best dining options in real time.
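A production system would use RL and live data feeds for this; as a minimal stand-in, the sketch below (with made-up names and numbers) simply recomputes the weighted ranking when fresh data changes one variable’s score:

```python
# Hypothetical illustration of real-time re-ranking -- not actual
# reinforcement learning. Names and numbers are invented.
scores = {
    "Top Pick":    {"quality": 95, "accessibility": 90},
    "Backup Cafe": {"quality": 80, "accessibility": 85},
}
weights = {"quality": 0.5, "accessibility": 0.5}

def rank():
    # Re-evaluate the weighted composite for every restaurant.
    total = lambda r: sum(scores[r][var] * w for var, w in weights.items())
    return sorted(scores, key=total, reverse=True)

print(rank())  # ['Top Pick', 'Backup Cafe']

# Unexpected traffic arrives as a live update that lowers the top
# pick's accessibility score; re-ranking promotes the next best option.
scores["Top Pick"]["accessibility"] = 30
print(rank())  # ['Backup Cafe', 'Top Pick']
```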
AI Utility Function Exercise Summary – Part 2
This “Becoming an AI Utility Function” exercise helps students understand how AI models make personalized recommendations by using a familiar activity – deciding where to dine. It involves four critical steps to enhance the understanding of how an AI model works to make a recommendation or decision.
In Step 1, students are introduced to the foundational concept of defining relevant variables and metrics, such as affordability, food quality, and ambiance. This stage is essential for setting up the criteria the AI model will use later to evaluate dining options. Students learn to think critically about the factors that most influence their decisions, mirroring how AI systems must be programmed to consider various elements in their algorithms.
In Step 2, we create analytic profiles or entity models for each restaurant. Each restaurant is scored on a scale of 1 to 100 based on the variables defined in Step 1. This quantification process demonstrates to students how raw data and subjective assessments are standardized for fair comparisons across different options. It also illustrates how AI systems process and evaluate diverse inputs.
In step 3, students significantly shape the AI model by personalizing the variables’ weights based on their specific dining intent. For example, they may be looking for a quick and affordable meal, or for a more atmospheric setting for a date night. They can then observe how these weights influence the final recommendations.
In step 4, the exercise ends with each restaurant’s composite score calculation, revealing which best matches their preferences.
Hopefully, this exercise demystifies the AI’s decision-making process and highlights the importance of brainstorming the variables and weighting them based on intent and desired outcomes.
In this educational activity, students gained practical experience with the principles of the AI utility function. They learned that the effectiveness of AI’s decision-making processes is greatly influenced by the precise definition and alignment of variables with the user’s intent. This hands-on exploration showcased AI technologies’ capacity to make well-informed, ethical, and personalized decisions in real-life scenarios.
It’s Jensen Huang’s world, and we’re all just living in it. Recently, the AI chip giant reported a profit of $14.881 billion, up 629% compared to the corresponding quarter last year. Revenue was $26.04 billion, surpassing the estimated $24.65 billion.
NVIDIA controls a whopping 95% of the AI chip market right now. To put it in perspective, NVIDIA is now larger than Tesla and Amazon combined. Furthermore, NVIDIA is now larger than the entire German stock market.
BREAKING: Nvidia stock, $NVDA, is now trading with a market cap above $2.5 TRILLION for the first time in history. To put this in perspective, Nvidia is now larger than Tesla and Amazon COMBINED. Furthermore, Nvidia is now larger than the entire German stock market. Nvidia is…
— The Kobeissi Letter (@KobeissiLetter) May 22, 2024
NVIDIA X (OpenAI Spring Update, Google I/O and Microsoft Build)
Last week, we saw several tech events, from OpenAI’s Spring update to Google’s I/O and Microsoft Build. What tied them together was NVIDIA, which made a notable presence at all three.
OpenAI released GPT-4o, which won hearts with its ‘omni’ capabilities across text, vision, and audio. OpenAI’s demos included a real-time translator, a coding assistant, an AI tutor, a friendly companion, a poet, and a singer. However, this would not have been possible without NVIDIA.
“I just want to thank the incredible OpenAI team and thanks to Jensen and the NVIDIA team for bringing us the advanced GPU to make this demo possible today,” said OpenAI CTO Mira Murati at the end of the event, expressing gratitude to NVIDIA. Notably, Huang personally delivered the first DGX H200 to OpenAI last month.
Similarly, both Microsoft and Google were quick to claim that they were the first ones to get their hands on NVIDIA’s latest GPU, Blackwell.
“We are also proud to be one of the first cloud providers to offer NVIDIA’s cutting-edge Blackwell GPUs, available in early 2025,” said Google chief Sundar Pichai at Google I/O.
“We’re bringing in the latest H200s to Azure later this year and will be among the first cloud providers to offer NVIDIA’s Blackwell GPUs in B100 as well as GB200 configurations,” said Microsoft chief Satya Nadella at Microsoft Build.
He added that Microsoft is continuing to work with NVIDIA to train and optimise models like GPT-4o and small language models like the Phi-3 family.
Apart from the above three players, leading LLM companies such as Adept, Anthropic, Character.AI, Cohere, Databricks, DeepMind, Meta, Mistral, xAI, and many others are building on NVIDIA AI in the cloud.
Blackwell Fever Begins
NVIDIA’s data centre revenue alone was $22.6 billion, a record, up 23% sequentially and 427% year-on-year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform.
“The demand for GPUs in all data centres is incredible. We’re racing every single day. And the reason for that is because of applications like ChatGPT and GPT-4o,” said Huang, adding that the future is going to be multi-modal.
Moreover, he announced that NVIDIA will build a new chip every year. “I can announce that after Blackwell, there’s another chip. We’re on a one-year rhythm,” said Huang on the earnings call. He further stated that the Blackwell platform is in full production and forms the foundation for trillion-parameter scale generative AI.
“Our production shipments will start in Q2 and ramp in Q3, and customers should have data centres stood up in Q4,” he said. “Blackwell’s time-to-market customers include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI,” said NVIDIA CFO Colette Kress.
Yotta, India’s sole NVIDIA Partner Network Cloud Partner (NCP), will receive NVIDIA’s latest Blackwell GPUs by October. Yotta plans to scale up its GPU inventory to 32,768 units by the end of 2025. Last year, the company announced that it would import 24,000 GPUs, including NVIDIA H100s and L40S, in a phased manner.
Huang also announced NVIDIA Spectrum-X, an advanced Ethernet networking platform specifically designed to enhance the performance and efficiency of AI workloads.
NVIDIA is betting big on autonomous vehicles. “We supported Tesla’s expansion of its training AI cluster to 35,000 H100 GPUs. Their use of NVIDIA AI infrastructure paved the way for the breakthrough performance of FSD Version 12, their latest autonomous driving software based on Vision,” said Kress.
Similarly, in a recent interview with Yahoo Finance, Huang stated that besides the cloud industry, the primary users of NVIDIA’s data centre chips are in the automotive industry. He then emphasised autonomous cars and their advancements.
“Tesla is far ahead in self-driving cars, but every single car, someday, will have to have autonomous capability,” said Huang.
The post NVIDIA Controls a Whopping 95% of the AI Chip Market appeared first on AIM.
Keeping up with an industry as fast-moving as AI is a tall order. So until an AI can do it for you, here’s a handy roundup of recent stories in the world of machine learning, along with notable research and experiments we didn’t cover on their own. By the way, TechCrunch plans to launch an AI newsletter […]
Genpact, in collaboration with HFS Research, recently released a report called ‘The GenAI Countdown.’ This new report highlights a notable concern: Many businesses are narrowly focusing on productivity, potentially negatively impacting employee experiences and stifling broader innovation.
The report stated that 52% of respondents expressed concerns that an overemphasis on productivity could lead to negative impacts on employee experience.
Genpact’s global head of AI/ML practice, Sreekanth Menon, said that companies should not pursue productivity just as an end goal but as a critical step in a broader, more strategic GenAI journey.
“We’re initially focused on leveraging GenAI for internal improvements and productivity because this will give us the confidence to expose it to our customers, knowing our internal experts have thoroughly tested it,” said Menon, adding that this initial focus on productivity builds trust and internally verifies the technology’s reliability before implementing it in broader, more impactful ways.
In a bid to gain deeper insights into GenAI’s long-term value and the risks of overemphasising immediate outcomes such as productivity improvements, HFS Research, in partnership with Genpact, a global professional services firm, surveyed 550 senior executives worldwide.
These respondents represent organisations with revenue exceeding $1 billion across 12 countries and eight industries. For further insights, they conducted in-depth interviews with ten enterprise leaders at the forefront of GenAI adoption and innovation.
Here are Key Highlights and Takeaways from ‘The GenAI Countdown’ report:
Most companies are early in their GenAI journey but are making significant investments, with 61% of executives allocating up to 10% of their tech budgets to GenAI, alongside a 30% increase in generative AI funding over the next year.
There is a significant shift toward partnerships for building GenAI capabilities, with 45% investing in technology providers specialising in GenAI solutions.
Top challenges include data quality or strategy (42% for IT leaders) and a lack of a structured GenAI plan (46% for business leaders).
36% of executives reported a shortage of skilled professionals.
Organisations are a bit hesitant about data sharing, with only 16% of enterprises using their proprietary data.
80% of executives recognised the need to transition to performance- and purpose-driven commercial models, moving away from traditional time-and-material models, with partners to fully capitalise on GenAI’s potential.
74% of executives believe that GenAI will inspire new and disruptive ways of value creation within their organisations.
Phil Fersht, CEO and chief analyst at HFS Research, described GenAI as “the first real innovation to disrupt industry since the advent of the internet,” suggesting that it’s transforming business interactions and operational models.
Overall, these insights reflect GenAI’s growing importance in reshaping business strategies and the imperative to adopt a balanced approach that integrates technology with human-centric frameworks.
The post Genpact Report on ‘The GenAI Countdown’ Warns Against Narrow Productivity Focus, Urges Innovation and Enhanced Employee Experiences appeared first on AIM.
Google AI Overview’s latest reply to ‘cheese not sticking to pizza’ has taken the internet by storm, making users question the potential of its generative AI search experience.
This is not the first time. During its initial lab testing phase, known as the Search Generative Experience (SGE), it asked users to ‘drink a couple of litres of light-coloured urine in order to pass kidney stones.’
Google also upset Indian IT Minister Rajeev Chandrasekhar after Gemini expressed a biased opinion against India’s Prime Minister Narendra Modi.
The trend continues. A few months ago, the tech giant had to temporarily suspend image-generating features after Gemini inaccurately depicted people of colour in Nazi-era uniforms, showcasing historically inaccurate and insensitive images.
Who better to explain than German AI cognitive scientist Joscha Bach?
Last month, at the AICamp Boston meetup, he discussed the implications of Google Gemini and how it reflects societal biases, influencing the system to give inaccurate results or outputs.
He said Gemini, despite not being explicitly programmed with certain opinions, ended up exhibiting biased behaviour, such as altering images to promote diversity but, as mentioned earlier, inadvertently depicting Nazi soldiers as people of colour.
He believes that this bias wasn’t hardcoded but inferred through the system’s interactions and prompts.
Bach said that Gemini’s behaviour reflects the social processes and prompts fed into it rather than being solely algorithmic. He said that the model developed opinions and biases based on the input it received, even generating arguments to support its stance on various issues like meat-eating or reproduction.
He highlighted the potential of such models for sociological study, as they possess a vast understanding of internet opinions. Instead of focusing solely on cultural conflicts, he suggested viewing these AI behaviours as mirrors of society, urging a deeper understanding of our societal condition.
Similarly, in ‘You Are to be Blamed for ChatGPT’s Flaws,’ AIM stressed that the cycle of misinformation is mostly driven by human inputs and interactions, not just AI capabilities.
Simply put, the responsibility for misinformation generated by AI largely falls on content creators, media platforms, and users rather than the technology itself. Essentially, human actors play a significant role in perpetuating misinformation, whether through its creation, dissemination, or failure to verify its accuracy.
That explains why OpenAI has been busy partnering with media agencies.
AI Hallucination as a Feature
These types of AI behaviours lead to what you may call ‘AI hallucinations,’ which many AI experts, including Yann LeCun, have said are an inherent feature of auto-regressive large language models (LLMs).
“That’s not a major problem if you use them as writing aids or for entertainment purposes. Making them factual and controllable will require a major redesign,” shared LeCun in February last year. Nothing much has changed since.
Some, like OpenAI chief Sam Altman and Elon Musk, consider AI hallucinations a form of creativity, and others believe hallucinations might help produce new scientific discoveries. In most cases where a correct response matters, however, they are a bug, not a feature.
During a chat at Dreamforce 2023 in San Francisco, Altman said AI hallucinations are a fundamental part of the “magic” of systems like ChatGPT, which users have grown fond of.
Altman highlighted the value OpenAI sees in addressing the technical complexities of hallucinations, noting that they offer unique insights. “A crucial aspect often overlooked is the inherent creativity these systems possess through hallucinations,” he explained.
Further, he said that while databases suffice for factual queries, AI’s capacity to generate novel ideas truly empowers users. “Our focus is on balancing this creativity with factual accuracy,” he added.
He emphasised the importance of not restricting AI platforms to only producing content when entirely certain, arguing that such an approach would be shortsighted.
“Opting for absolute certainty at all times would undermine the very essence of these systems,” Altman asserted. “While it’s tempting to impose strict guidelines, doing so would strip away the enchanting unpredictability that users find so captivating.”
The same goes for Grok: Elon Musk is looking to include a ‘Fun Mode’ in which users will be able to see a humorous take on the news.
Solving AI Hallucinations
Even so, hallucinations still pose a major problem, and research into addressing them is ongoing. A few experts believe a potential architectural solution, such as vector databases, and particularly vector SQL, can reduce hallucinations.
LLMs often generate misinformation due to their statistical nature, but vector databases offer a way for LLMs to query human-written content for accurate responses.
Vector SQL has LLMs query a database instead of relying solely on their training data, thereby reducing hallucinations. The LLM translates a user’s natural-language question into a SQL query, which a vector SQL engine then executes against the database. This method improves the efficiency, flexibility, and reliability of AI-generated content.
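As a toy illustration of the retrieval step behind this idea: the engine ranks stored, human-written passages by vector similarity to the query, and the LLM answers from the retrieved text instead of free-associating. The documents, the three-dimensional embeddings, and the pgvector-style `<->` operator mentioned in the comment are all assumptions for illustration, not any specific product’s API:

```python
import math

# A real system stores embeddings in a database and lets the engine
# evaluate something like (pgvector-style syntax, shown as an assumption):
#   SELECT content FROM docs ORDER BY embedding <-> :query_vec LIMIT 1;
# Here we imitate that ranking in pure Python with toy 3-d embeddings.
docs = {
    "Pizza dough needs high-gluten flour.": [0.9, 0.1, 0.0],
    "Cheese sticks better with a thin sauce layer.": [0.1, 0.9, 0.1],
    "Ovens should preheat to 250C for pizza.": [0.2, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity: dot product divided by the vectors' norms.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, k=1):
    # Rank human-written passages by similarity to the query embedding.
    ranked = sorted(docs, key=lambda d: cosine(docs[d], query_vec), reverse=True)
    return ranked[:k]

print(retrieve([0.0, 1.0, 0.0]))  # the cheese/sauce passage ranks first
```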
Despite existing similar methods, vector SQL presents a novel approach to mitigating hallucinations. For instance, Microsoft’s Bing Chat uses a system called Prometheus, which combines Bing’s search engine with OpenAI’s GPT models to reduce inaccuracies and provide linked citations for confidence in responses.
With advancements like vector SQL, the era of hallucination-free LLMs might be on the horizon, offering more reliable and accurate AI-generated content.
Further, advanced techniques, like CoVe by Meta AI, knowledge graph integration, RAPTOR, conformal abstention, and reducing hallucination in structured outputs via RAG, as well as simply using more detailed prompts, are emerging to mitigate LLM hallucinations.
The post Google AI Overview’s Suggestion of Adding Glue to Pizza Says a Lot About Human Flaws appeared first on AIM.
The ongoing discussions around AI’s impact on workplace dynamics have meandered towards personal productivity. Recently, major players like OpenAI, Google, and Microsoft have unveiled innovative tools to maximise efficiency in professional life.
According to Microsoft and LinkedIn’s latest findings from the 2024 Work Trend Index, 92% of knowledge workers in India use AI at work compared to the global figure of 75%. This reflects employee confidence in AI to save time and boost creativity and focus.
Already, leaders like OpenAI CEO Sam Altman and Microsoft’s Satya Nadella think AI systems act like “senior employees” and “project managers”.
Here’s a look at five cutting-edge AI tools that can revolutionise productivity at the workplace.
Google AI Teammate
Unveiled at Google I/O, the ‘AI Teammate’, powered by Gemini, promises to alleviate workloads by automating routine tasks and engaging in team communications.
Gemini has introduced Gmail Q&A functionality, enabling users to interact with the AI to gather specific information from their inbox swiftly, further enhancing the efficiency of email interactions.
8. An AI teammate that lives inside Google workspace to do collaborative tasks
— Chief AI Officer (@chiefaioffice) May 14, 2024
Gemini for Workspace
Integrated into Gmail’s mobile app, Gemini for Workspace offers streamlined email management through summarisation and Contextual Smart Reply suggestions. Its application extends beyond email to various workflows within Workspace and other Google apps, promising seamless automation and enhanced productivity.
At Google I/O, Aparna Pappu, vice president of Google Workspace, highlighted the successful integration of Gemini for Workspace at Sports Basement in California, which boosted the organisation’s customer support team’s productivity by over 31%.
The new updates will be initially accessible to Workspace Labs and Gemini for Workspace Alpha users, with broader availability anticipated for all Gemini for Workspace customers and Google One AI Premium subscribers in the next month.
Recall
Microsoft introduced Recall, a semantic search feature embedded within Windows PCs, enabling deep exploration of digital histories to recreate past moments.
With Recall enabled, Windows PCs capture screenshots of users’ activities and feed this data into a sophisticated AI model embedded directly within the device. Through neural processing, every image and interaction becomes searchable, even extending to photographs.
Team Copilot
Microsoft’s Team Copilot serves as a meeting facilitator within Teams meetings and chats. It can summarise meetings by taking notes, suggest follow-up tasks based on discussions, and keep meetings on structure by tracking time for agenda items.
It can also moderate group chats, respond to queries from shared documents, and act as a project manager in Planner by assigning tasks and goals, and providing advice to users to progress tasks.
With its ability to participate in meetings, chats, and task management, Team Copilot streamlines collaboration and enhances productivity within teams.
Furthermore, its capability to respond to questions based on shared documents and offer insightful advice further solidifies its role as an indispensable asset in team dynamics, facilitating seamless communication and task execution.
ChatGPT Desktop App
The ChatGPT desktop app, available for both free and paid users, offers seamless integration into users’ workflow.
With features like real-time screen reading and instant question-answering, this app enhances productivity by providing quick access to information and facilitating collaborative discussions.
The ChatGPT desktop app just became the best coding assistant on the planet. Simply select the code, and GPT-4o will take care of it. Combine this with audio/video capability, and you get your own engineer teammate.
— Pietro Schirano (@skirano) May 13, 2024
The post 5 New AI Features that will Take Your Productivity to the Next Level appeared first on AIM.
The EU AI Hub, launched last week by AI security firm Cranium with KPMG and Microsoft, is a service designed to assist businesses in complying with the newly adopted EU AI Act. With expert advice and bespoke technologies, users will be taken through a series of steps to identify what parts of the AI Act apply to their products and what they need to do to comply.
On March 13, 2024, the European Union Parliament voted the AI Act into law. This means businesses that offer AI products in the region will soon need to abide by its strict rules regarding facial recognition, safeguards and consumer complaints and questions.
While the EU AI Act won’t come into force until late 2024 at the earliest, many companies are looking into complying with its requirements to ensure they are prepared and do not incur any penalties. However, navigating such comprehensive regulations is no mean feat, and this is why the EU AI Hub was created.
What is the EU AI Hub?
The EU AI Hub is a service designed to take global organisations through a series of steps that help them understand how the EU AI Act’s regulations apply to their products, comply with them, and embrace AI responsibly. To achieve these goals, they will be given access to:
KPMG’s Trusted AI Framework and expertise in strategy, transformation, technology, data sciences and assurance.
Cranium’s enterprise AI security platform, which captures the AI Bill of Materials, runs risk reports and performs gap analysis against the EU AI Act framework.
Microsoft’s AI technologies.
“A business’s journey through the Hub will depend on where it currently is in its AI journey, so we will first identify an organisation’s objectives regarding meeting EU compliance requirements,” Daniel Christman, director of AI programs at Cranium, told TechRepublic.
“We’d then identify the path toward bringing a particular AI system or systems into a compliant state, and we would leverage the Cranium technology platform, KPMG services and Microsoft technology and expertise to determine and implement the relevant controls and oversight to achieve compliance.”
Resources provided by the Hub will ensure all of the business’s AI implementations are compliant, practical for their requirements and ethically sound. Businesses can work with experts from the initial strategy and design of AI technologies all the way through deployment and optimisation, using input from regulators and relevant stakeholders.
Christman is currently unsure about how long it will take a Hub user to reach compliance, though he hopes they will be able to “scale compliance across multiple AI systems much faster” than if they were to attempt it alone.
Sean Redmond, director of the EU AI Hub, said in a press release, “Compliance with the EU AI Act and other regulatory frameworks shouldn’t be seen as a block to innovation/ideation, but instead provide the guardrails that enable organizations to experiment with AI and deliver value to their businesses and customers.”
How much does it cost to use the EU AI Hub?
“Pricing will flex based on what the business is looking to achieve in the Hub,” Christman told TechRepublic. “Simply leveraging some of the expertise and knowledge will be no to minimal cost, with more intensive service provision and technology implementation bringing additional investment.”
Which businesses should consider using the EU AI Hub?
The EU AI Act will apply directly to businesses located in the 27 EU member states and any businesses with customers in those states, regardless of their location. These businesses could be providers, deployers, importers or distributors of AI systems and may consider using the EU AI Hub to ensure compliance.
Christman told TechRepublic, “Many global businesses are still struggling to get their AI systems ready. Given that the final requirements only recently passed the final legal hurdles, this is somewhat to be expected — but it will still be a challenge for organisations to scale compliance across the enterprise.
“Primarily, organisations have a major challenge in capturing the full inventory of AI systems being developed internally, as well as those included in third-party tools and services.”
Developers of AI systems deemed to be “high risk” will have to meet certain obligations to comply with the AI Act, including the mandatory assessment of how their AI systems might impact the fundamental rights of citizens. This applies to the insurance and banking sectors, as well as any AI systems with “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law.”
Providers of general-purpose AI systems must also meet certain transparency requirements under the AI Act; this includes creating technical documentation, complying with European copyright laws and providing detailed information about the data used to train AI foundation models. The rule applies to models used for generative AI systems like OpenAI’s ChatGPT.
What is the deadline for compliance with the EU AI Act?
While the AI Act was approved in March, there are still a few steps to be taken before businesses must abide by its regulations. The EU AI Act must first be published in the EU Official Journal, which is expected to happen in June or July this year. It will enter into force 20 days after publication, but the requirements of the AI Act will apply in stages:
Bans on prohibited practices will apply six months after entry into force, so approximately December 2024.
Codes of practice will go into effect nine months after entry into force, so approximately March 2025.
General-purpose AI rules, including governance, will go into effect 12 months after entry into force, so approximately June 2025.
The remaining provisions of the EU AI Act will apply 24 months after entry into force, so approximately June 2026.
Obligations for high-risk systems will go into effect 36 months after entry into force, so approximately June 2027.
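Because every deadline above is a fixed offset from the entry-into-force date, the staged timeline can be sketched in a few lines of Python. This is an illustration only: the 1 July 2024 entry-into-force date is a hypothetical placeholder, since the real date depends on when the Act is published in the EU Official Journal.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after `d` (day clamped to month length)."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

# Hypothetical entry-into-force date (publication date + 20 days is assumed here).
entry_into_force = date(2024, 7, 1)

milestones = {
    "Bans on prohibited practices": add_months(entry_into_force, 6),
    "Codes of practice": add_months(entry_into_force, 9),
    "General-purpose AI rules": add_months(entry_into_force, 12),
    "Act applies in full (with exceptions)": add_months(entry_into_force, 24),
    "High-risk system obligations": add_months(entry_into_force, 36),
}

for label, when in milestones.items():
    print(f"{label}: {when:%B %Y}")
```

Shifting `entry_into_force` to the actual publication-plus-20-days date would shift every milestone accordingly.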
What are the penalties for breaching the EU AI Act?
Companies that fail to comply with the EU AI Act face fines of up to €35 million ($38 million USD) or 7% of global annual turnover for the most serious infringements, down to €7.5 million ($8.1 million USD) or 1.5% of turnover for lesser breaches, depending on the infringement and the size of the company.
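The two penalty tiers mentioned above can be sketched as a small calculation. This assumes the "whichever is higher" rule used in comparable EU legislation (i.e., the percentage applies when it exceeds the fixed ceiling); the tier names are illustrative labels, not terms from the Act.

```python
def max_fine_eur(tier: str, global_turnover_eur: float) -> float:
    """Estimate the maximum fine for a penalty tier, assuming the fixed ceiling
    or the turnover percentage applies, whichever is higher."""
    tiers = {
        "most_serious": (35_000_000, 0.07),   # e.g. prohibited practices
        "lesser":       (7_500_000, 0.015),   # lower-tier infringements
    }
    fixed_cap, pct = tiers[tier]
    return max(fixed_cap, pct * global_turnover_eur)

# A company with €2 billion in global turnover: 7% (€140M) exceeds the €35M ceiling.
print(max_fine_eur("most_serious", 2_000_000_000))  # 140000000.0
# A small company with €10 million turnover is bounded by the fixed ceiling.
print(max_fine_eur("most_serious", 10_000_000))     # 35000000.0
```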
Google recently introduced its newest text-to-video AI model, Veo, to compete with OpenAI’s Sora. Unveiled during Google I/O, Veo builds on techniques from prior video models to improve consistency, quality and resolution, generating 1080p videos that can exceed a minute in length.
While some were impressed with Veo’s capabilities, others argue that it may not be state-of-the-art in latency or capability compared with Sora.
Never bet against @Google! They just dropped a Sora competitor. 1080p, over a minute long vids, and impeccable quality pic.twitter.com/XtOt7ftS46
— Andrew Gao (@itsandrewgao) May 14, 2024
Google invited multiple filmmakers to experiment with Veo and even aired a short film by Donald Glover at I/O 2024.
On the other hand, OpenAI is pitching its video AI Sora to Hollywood and plans to release it publicly later this year, potentially integrating it with video editing software like Adobe Premiere Pro.
Is Veo better than Sora?
The comparison between the two models is still a topic of debate, as neither model has been released yet. Nonetheless, many believe that Veo will be a strong competitor to Sora.
A demo that has been shared several times features a lone cowboy riding across an open plain during a beautiful sunset. This image feels reminiscent of videos showcased by OpenAI’s Sora, highlighting potential similarities between the two models.
Prompt: “A lone cowboy rides his horse across an open plain at beautiful sunset, soft light, warm colors.” pic.twitter.com/D8uKDZVWto
— Google DeepMind (@GoogleDeepMind) May 14, 2024
Let’s take a look at some of Veo’s standout features.
Realism and Visual Control
Veo adapts effectively to a diverse range of user inputs and prompts, adding an extra layer of realism to generated videos. Its consistency, coherence and realism set it apart from other platforms, producing videos with superior visual quality.
Additionally, the neural network can allegedly understand prompts for various cinematic effects, allowing users to include filmmaking terms such as “time-lapse,” “aerial shot,” and “panning shot” in their descriptions to achieve the desired motion accurately.
Sora, by contrast, often introduces slight variations between frames, and unlike Veo, its videos frequently distort intricate details.
Ease of Use
As mentioned before, the Veo model understands complex camera movements and visual effects specified in prompts, such as “pan,” “zoom,” or “explosion.” This capability simplifies the video creation process for users, allowing them to create dynamic narratives effortlessly.
While Sora offers similar features, Veo stands out by emphasising user control, which enhances the overall ease of use for those looking for a seamless content creation experience.
Video Length Continuity
Users can effortlessly extend video lengths with a simple click, enhancing the overall viewing experience. Moreover, Veo ensures that each frame maintains continuity, avoiding the jarring transformations or artefacts commonly seen in Sora-generated content.
In contrast, Sora’s approach to visual quality can introduce subtle inconsistencies between frames due to its underlying algorithms. This difference becomes apparent when examining the intricate details within videos.
Meanwhile, according to reviews, Veo excels in preserving characters, objects, and styles seamlessly. By leveraging cutting-edge latent diffusion transformers, Veo minimises discrepancies effectively, resulting in visually stunning and lifelike video outputs.
Maintaining Video Sequences
Veo boasts an array of impressive capabilities, including the ability to edit existing videos using text commands, ensuring visual consistency across frames, and generating video sequences lasting over 60 seconds from a single prompt or a series of prompts forming a narrative.
“When given both an input video and editing command, like adding kayaks to an aerial shot of a coastline, Veo can apply this command to the initial video and create a new, edited video,” the company claimed.
On the other hand, Sora has distinguished itself by producing highly detailed and realistic short video clips. However, it falls short in comparison to Veo as it currently lacks the advanced video editing and narrative generation features that Veo is purported to possess.
A user on Reddit said, “Notice that the Veo demo doesn’t show a single human face, or any human bodies except in complete silhouette. Compare Veo to Sora in terms of the amount of movement, level of detail, diversity of style, and ability to merge concepts. It’s not close.”
As noted, the debate over which model is better will continue, but there won’t be a clear winner until both models are made available to the public.
openai sora vs google veo pic.twitter.com/c20oQ5r4I4
— The Technology Brother (@thetechbrother) May 14, 2024