TIOBE Index News (May 2024): Why is Fortran Popular Again?

Fortran was in the top 10 of the TIOBE Programming Community Index in April and May, prompting a question from the TIOBE community. Why is this programming language, created in 1957, popular again in 2024? Otherwise, there hasn’t been much movement in the top 10 rankings compared to last month.

The TIOBE Programming Community Index shows trends in programming languages based on search engine volume.

Trends year-over-year from the TIOBE Programming Community Index. Image: TIOBE Software

Why Fortran is back in TIOBE’s top 10

TIOBE Software CEO Paul Jansen noted there are multiple reasons why Fortran is popular again.

First, Fortran is especially good at numerical analysis and computational mathematics. Numerical and mathematical computing is growing because interest in artificial intelligence is growing, Jansen told TechRepublic in an email.

“All those models need to be calculated,” Jansen said.

Fortran has some advantages over the other programming languages that can be used in numerical and mathematical computing. Jansen wrote in the May update of the index: “Python: choice number one, but slow, MATLAB: very easy to use for mathematical computation but it comes with expensive licenses, C/C++: mainstream and fast, but they have no native mathematical computation support, R: very similar to Python, but less popular and slow, Julia: the rising new kid on the block, but not mature yet.” Therefore, Fortran is a relatively inexpensive, fast and versatile choice.

Second, Fortran is regaining popularity in the area of image processing, such as that used in gaming and medical imaging, he said.

Fortran is still being updated; the ISO Fortran 2023 specification was published in November 2023.

The venerable language is “fast, having native mathematical computation support, mature, and free of charge,” Jansen said.

Other changes in the TIOBE Index in May

Elsewhere in the TIOBE top 10, Python, C and C++ comfortably keep their spots in the top three. Last month, Jansen predicted PHP’s popularity would fade, and he was right. PHP sat at 1.09% popularity in April and 0.97% in May, continuing its downward trend. PHP fell from number eight to number 16 over the last year.

The Era of Synthetic Politics: Examining the Impact of AI-Generated Campaign Messages

Discover how AI-generated messages, ethical concerns, and real-world examples are shaping the future of politics

Synthetic politics represents the fusion of technology and political processes, driven by the substantial influence of Artificial Intelligence (AI) and advanced technologies on political campaigns, messaging dissemination, and voter engagement. In this era of rapid technological advancement, traditional norms are being challenged, introducing novel dynamics that reshape politics.

The integration of AI into political campaigns has been both gradual and profound. Initially, campaigns relied on simple online banners and email outreach, but as technology evolved, AI algorithms began analyzing extensive data to tailor messages to individual voters. This evolution has led to an era where AI-powered chatbots engage with voters, personalized advertisements target specific demographics, and predictive models optimize campaign strategies. While adopting AI in politics offers new opportunities, it also raises ethical concerns, urging a serious examination of its impact.

The Digital Transformation of Political Campaigns

Digital technologies have transformed political campaigns, shifting from traditional methods like billboards to digital mediums such as social media and apps. This shift has broadened information access, allowing campaigns to reach vast audiences quickly.

AI plays a central role in shaping political messaging. Machine learning algorithms analyze voter behavior, preferences, and sentiments to tailor messages effectively. AI-powered chatbots engage with potential voters, providing information and gathering data. Moreover, AI enables micro-targeting, allowing campaigns to deliver personalized messages to specific demographics based on their online activities, interests, and affiliations. By automating tasks and personalizing interactions, AI enhances the efficiency and effectiveness of political campaigns.

At the core of AI-driven campaigns lies data. Voter profiles, social media interactions, and historical voting patterns are input into algorithms to craft messages that resonate with individual voters. This data-driven approach enables campaigns to optimize resource allocation, strategically allocate advertising budgets, and predict electoral outcomes. However, concerns about the ethical use of data persist, as privacy breaches and algorithmic biases can potentially undermine democratic principles. In fact, balancing AI’s advantages with ethical considerations is essential. Therefore, policymakers, technologists, and citizens must cooperate to benefit from AI without compromising democratic integrity.

The Mechanics of AI-Generated Messages

AI-generated messages are created through data analysis and machine learning algorithms. These systems process vast datasets to identify patterns in voter preferences and behavior, analyzing past campaign data, social media activity, and demographic information. By doing so, AI can determine the most effective messaging strategies for different voter segments. The process involves training models on successful political speeches and communications, enabling them to generate messages that resonate with targeted audiences.

AI’s targeting extends to individual personalization, producing messages based on voters' unique interests and sentiments. This precision makes messages more relevant and engaging, potentially improving voter response. However, it also raises privacy and ethical concerns regarding the use of personal data in campaigns.

Globally, political campaigns utilize AI for message generation. For example, AI has been used to personalize email campaigns and social media ads in the United States. Similarly, in Europe, political parties have employed AI to analyze voter sentiment and adjust their messaging accordingly. In countries like India and Brazil, where mobile usage is high, AI-powered chatbots have been used to interact with voters and disseminate campaign information.

The Ethical and Social Impact of AI in Politics

AI's political role brings ethical and social challenges, notably the risk of deepening political polarization. AI's personalized content can trap voters in echo chambers, limiting exposure to varying views and fostering division. This could lead to a less informed electorate.

Another concern is the integrity of AI-generated political messages. The potential for AI to spread misinformation or biased content under the guise of genuine communication threatens the trustworthiness of the political process. Transparency and truthfulness in AI messages are essential and thus require clear labelling and stringent fact-checking.

Consequently, regulatory frameworks must address these issues, balancing innovation with accountability. Regulations should set data privacy standards, mandate consent for personalization, and combat misinformation. Responsible use of AI in politics, guided by policymakers, can minimize its adverse effects on campaigns.

AI's Influence on Elections: Real-World Examples

In recent elections worldwide, the influence of AI has been significant. From the United States to Kenya, AI has been used to micro-target voters and optimize campaign resources.

The 2016 US presidential election and subsequent events have spotlighted the multifaceted influence of digital technologies on voter decisions and political campaigns. The election's aftermath, particularly the losing candidate's reflections in her memoir, highlighted the significant role of disinformation in shaping public sentiment and altering political dynamics.

This period also saw the emergence of concerns over the use of AI-generated deepfake content, which was highlighted in a Byline Times report. The report highlighted the limited capacity of UK election oversight bodies to address such content, revealing vulnerabilities in the political landscape to potential manipulation by AI-generated forgeries.

Furthermore, the Cambridge Analytica scandal served as a reminder of the risks associated with data misuse in politics. The unauthorized collection and use of Facebook user data for political advertising demonstrated how AI and data analytics could be exploited to manipulate public opinion and interfere with democratic processes. This incident shows the urgent need for stringent data privacy laws and ethical standards in political campaigning, sparking a global conversation on the ethical implications of digital technologies in elections.

Navigating the Future of Political Campaigning

As AI technologies like natural language generation and deep learning advance, they are set to revolutionize political campaigning. These technologies promise sophisticated personalization, sentiment analysis, and predictive modelling, potentially engaging voters in novel ways while raising ethical questions about privacy and consent.

The rapid developments in AI pose challenges for legal and regulatory frameworks, necessitating proactive legislation to safeguard voter rights and ensure innovation. This includes data protection, algorithmic transparency, and AI accountability in campaigns. AI’s integration into politics may shift societal norms, affecting voter engagement and public discourse, with more online interaction and potential echo chambers.

In addition, regulating AI in campaigns demands collaboration among tech companies, governments, and civil society to establish ethical standards and educate the public. This collective effort can promote innovation while upholding democratic integrity.

The Bottom Line

In the age of synthetic politics, the integration of AI has undeniably transformed political campaigning. While AI presents unprecedented opportunities for engagement and efficiency, it also poses significant ethical and social challenges.

As we progress further in the AI-driven era, transparency, accountability, and media literacy emerge as important pillars for maintaining trust in democratic processes. By promoting collaboration and implementing responsible practices, we can utilize the power of AI while safeguarding the integrity of political discourse.

Integrating microservices with legacy systems through API management

In software architecture, the shift towards microservices has become a dominant trend. Microservices offer agility, scalability, and resilience, making them an attractive choice for modernizing IT infrastructures. However, many organizations grapple with the challenge of integrating microservices with their existing legacy systems. These legacy systems, often monolithic in nature, have been the backbone of enterprises for years, containing valuable data and business logic. The question arises: How can organizations seamlessly integrate microservices with legacy systems while preserving their investments and ensuring a smooth transition to a modern architecture? The answer lies in API management.

The role of API management in integrating microservices with legacy systems

  1. Standardized communication: Legacy systems often lack standardized interfaces for communication, making integration with modern microservices challenging. API management platforms act as intermediaries, offering a consistent set of APIs that abstract the complexities of legacy systems. Through API management, microservices can communicate with legacy systems using well-defined protocols and data formats, ensuring interoperability and compatibility.
  2. Legacy system exposition: Many legacy systems are not designed to expose their functionalities as services or APIs. API management platforms enable organizations to expose legacy system functionalities as reusable APIs, encapsulating business logic and data access mechanisms. This abstraction layer shields microservices from the intricacies of legacy systems, allowing them to consume services without direct dependencies.
  3. Security and compliance: Integrating microservices with legacy systems raises security concerns related to sensitive data. API management platforms offer robust security features such as authentication, authorization, and encryption, ensuring that data exchanged between microservices and legacy systems is protected. Moreover, API management facilitates compliance with regulatory requirements by enforcing policies and access controls across the integration landscape.
  4. Transformation and mediation: Legacy systems often use outdated technologies and data formats incompatible with modern microservices. API management platforms enable data transformation and mediation, converting data between different formats and protocols as required. This transformation layer ensures seamless interoperability between microservices and legacy systems, regardless of technological disparities.
  5. Scalability and performance: Legacy systems may struggle to handle the scalability demands imposed by microservices architecture. API management platforms alleviate this burden by providing capabilities such as caching, load balancing, and throttling. These features optimize performance and scalability, ensuring that microservices can interact with legacy systems efficiently and reliably.
  6. Real-time integration: In today’s fast-paced business environment, real-time integration between microservices and legacy systems is essential. API management platforms offer event-driven architectures and messaging capabilities that enable real-time communication and data synchronization. This real-time integration ensures that microservices and legacy systems remain synchronized and responsive to changing business needs.
  7. Legacy system modernization: API management platforms play a pivotal role in the gradual modernization of legacy systems. By encapsulating legacy functionalities as APIs, organizations can incrementally replace legacy components with microservices while maintaining backward compatibility. This phased approach to modernization minimizes disruption and mitigates risks associated with large-scale system overhauls.
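To make the transformation-and-mediation role concrete, here is a minimal Python sketch of a mediation layer that converts a legacy XML record into the JSON shape a microservice expects. The payload and field names are hypothetical, and a real API management platform would do this declaratively rather than in hand-written code:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical payload, in the shape a legacy order system might return it.
LEGACY_XML = """
<ORDER>
  <ORD_NO>10042</ORD_NO>
  <CUST_NM>Acme Corp</CUST_NM>
  <AMT CUR="USD">1999.50</AMT>
</ORDER>
"""

def mediate_order(legacy_xml: str) -> str:
    """Translate a legacy XML record into the JSON shape microservices expect."""
    root = ET.fromstring(legacy_xml)
    amount = root.find("AMT")
    modern = {
        "orderId": int(root.findtext("ORD_NO")),
        "customerName": root.findtext("CUST_NM"),
        "amount": {
            "value": float(amount.text),
            "currency": amount.get("CUR"),
        },
    }
    return json.dumps(modern)

print(mediate_order(LEGACY_XML))
```

The microservice only ever sees the modern JSON contract; if the legacy schema changes, only this mediation layer needs updating.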

Challenges and how to overcome them

Despite the benefits, integrating microservices with legacy systems poses several challenges. Here are some common hurdles organizations may face and strategies to overcome them:

  1. Legacy system complexity: Legacy systems often have complex architectures, undocumented code, and dependencies, making it challenging to understand their inner workings. To overcome this challenge, organizations should conduct comprehensive legacy system assessments to identify dependencies, document functionalities, and prioritize components for integration. API management platforms can help abstract the complexity of legacy systems through a standardized interface for communication, shielding microservices from underlying intricacies.
  2. Data synchronization: Maintaining data consistency and synchronization between microservices and legacy systems can be challenging, especially in real-time scenarios. Organizations should implement data synchronization mechanisms such as event-driven architectures, message queues, or data replication strategies to ensure data consistency across the integration landscape. API management platforms with real-time integration capabilities facilitate data synchronization by providing event-driven architectures and messaging capabilities.
  3. Security risks: Integrating microservices with legacy systems introduces risks, such as data breaches and compliance violations. To mitigate these risks, organizations should implement robust security measures, including authentication, authorization, encryption, and auditing. API management platforms offer security features that enforce policies and access controls, ensuring secure communication between microservices and legacy systems. Additionally, regular security assessments and audits can help identify and address vulnerabilities in the integration landscape.
  4. Performance bottlenecks: Legacy systems may struggle to handle the scalability and performance demands imposed by microservices architecture, leading to performance bottlenecks and service degradation. To address this challenge, organizations should optimize legacy system performance through techniques such as caching, load balancing, and resource optimization. API management platforms provide scalability features such as caching, load balancing, and throttling, optimizing performance and ensuring seamless interaction between microservices and legacy systems.
  5. Organizational resistance: Resistance to change within the company can impede the integration of microservices with legacy systems. To overcome organizational resistance, organizations should foster a culture of collaboration, communication, and continuous learning. Leadership support, stakeholder engagement, and clear communication of the benefits of integration can help alleviate concerns and garner support for the transition to a modern architecture. Additionally, providing training and resources to teams involved in the integration process can empower them to embrace the change and contribute to its success.
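As a minimal illustration of the event-driven synchronization pattern described above, a consumer can drain a queue of change events emitted by the legacy system of record and apply them to a local read model. The event shape and entity fields here are hypothetical; production systems would use a durable message broker rather than an in-process queue:

```python
from queue import Queue

# Hypothetical change events emitted by a legacy system of record.
events = Queue()
events.put({"entity": "customer", "id": 7, "field": "email", "value": "a@example.com"})
events.put({"entity": "customer", "id": 7, "field": "tier", "value": "gold"})

# Local read model kept in sync by the microservice consumer.
read_model = {}

def apply_events(q, model):
    """Drain pending change events and apply them to the local read model."""
    while not q.empty():
        ev = q.get()
        model.setdefault(ev["id"], {})[ev["field"]] = ev["value"]

apply_events(events, read_model)
print(read_model)  # the customer record now reflects both changes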

Conclusion

In conclusion, API management serves as a critical enabler in integrating microservices with legacy systems. By providing standardized communication, security, transformation, scalability, and modernization capabilities, API management platforms bridge the gap between the old and the new, facilitating a seamless transition to a modern, microservices-based architecture. As organizations pursue a digital transformation journey, using API management becomes crucial to unlock the full potential of microservices while leveraging existing investments in legacy systems.

Snowflake’s Strategic Acquisition of Neeva Pays Off

Last year, Snowflake acquired Neeva, which turned out to be one of its best decisions.

Sridhar Ramaswamy founded Neeva, an ad-free, privacy-focused search engine, alongside another ex-Google executive, Vivek Raghunathan, in 2020. At the time of its last funding round in 2021, Neeva was valued at about $250 million.

Fast forward to the present, Snowflake is a household name for enterprises. The company recently introduced a slew of open-source models, including Arctic LLM, designed for enterprises that want to use large language models (LLMs) to create conversational SQL data copilots, code copilots, and RAG chatbots.

Credit for this goes to Sridhar Ramaswamy, who, earlier this year, became Snowflake’s new CEO. Since assuming the position, the company has transformed from a mere data management service provider to a data and AI-driven entity with a strong emphasis on generative AI.

“I think it’s a huge opportunity in the world of data applications and AI. It will keep me busy for many years to come,” said Ramaswamy in a recent interview after taking the helm at Snowflake.

In an exclusive interview with AIM, Snowflake head of AI Baris Gultekin said that he had worked with Ramaswamy for over 20 years at Google, calling him an incredible leader. “Sridhar brings incredible depth in AI as well as data systems. He has managed super large-scale data systems and AI systems at Google,” Gultekin said.

Neeva’s expertise in generative AI and LLMs, now integrated into the Snowflake Data Cloud, has enhanced Snowflake’s AI capabilities, especially in natural language processing and search functionalities within its cloud data platform.

“Neeva is an important acquisition for Snowflake. We are integrating many things from Neeva into Snowflake’s offerings, the most obvious one of which is Snowflake’s Universal Search product,” said Gultekin.

Universal Search helps customers quickly and easily find database objects in their account, data products available in the Snowflake Marketplace, relevant Snowflake Documentation topics, and Snowflake Community Knowledge Base articles.

Snowflake’s Generative AI Prowess

While there are several generative AI models out in the market, Snowflake has selected its niche in targeting enterprise customers. Recently, the company made Snowflake Cortex generally available.

Cortex grants access to pre-trained LLMs from various providers, including Snowflake’s own Arctic LLM. These models can perform tasks like text summarisation, sentiment analysis, question answering, and code generation, all within the Snowflake environment.

Moreover, Cortex offers pre-built SQL functions that enable users to perform machine learning tasks on their data without extensive coding expertise. These functions handle tasks like classification, regression, and anomaly detection.

Currently, Snowflake Arctic outperforms leading open models such as DBRX, Llama 2 70B, Mixtral-8x7B, and more in coding (HumanEval+, MBPP+) and SQL generation (Spider and Bird-SQL), while also providing superior performance in general language understanding (MMLU).

Snowflake has also partnered with Mistral, Meta, and Reka to host their LLMs on Cortex. “We’ve partnered with Landing AI, AI21 Labs, and other capable partners to build amazing products. They’re important to us as they allow us to provide choices to our customers,” said Gultekin.

Gultekin further said that Snowflake is developing LLMs at a very affordable price, prioritising the security of their customers’ data. “Despite using a 17x less compute budget, Arctic is on par with Llama 3 70B in language understanding and reasoning while surpassing in Enterprise Metrics,” said Gultekin.

Additionally, he said that they had 10,000 customers entrusting Snowflake with their sensitive data. With this in mind, he emphasised that all the LLMs that they operate are within strict security parameters, meaning no data leaves and everything remains secure.

Moreover, he added that even though Arctic LLM is orders of magnitude smaller than OpenAI’s models, benchmarks show it excels in document understanding and question answering with its document data model.

Snowflake recently introduced Document AI to extract valuable content from unstructured data like PDFs, images, and videos. Powered by Arctic-TILT, a multimodal large language model, it offers efficient content extraction for enterprises.

“We’re just getting started. There’s a lot to build. I’ll say the core use cases for us are being able to talk to data and how we can make that a lot better and a lot easier,” concluded Gultekin, saying they put out a whole pile of products just recently for public preview. This included a series of chat products that are able to chat with structured data.

Snowflake Is Not Alone

Coincidentally, Snowflake’s acquisition of Neeva is similar to Databricks’ acquisition of MosaicML. Naveen Rao, who founded MosaicML, is now the VP of generative AI at Databricks.

MosaicML specialises in optimising machine learning models and has been integrated into Databricks’ offerings to enhance generative AI development.

Recently, Databricks also released its own mixture-of-experts model, DBRX, built with 132 billion parameters and pre-trained on a dataset of 12 trillion tokens. DBRX outperforms GPT-4, particularly in niche areas like SQL and RAG tasks.

The post Snowflake’s Strategic Acquisition of Neeva Pays Off appeared first on Analytics India Magazine.

SoftBank Eyes Indian Data Centre Investments to Bolster AI Portfolio

SoftBank, the Japanese technology conglomerate, is considering investments in Indian data centre and industrial robotics companies as part of its strategy to capitalise on the infrastructure layer of artificial intelligence (AI). Sources reveal that SoftBank is evaluating potential deals in these sectors and may invest between $75 million and $150 million per deal once discussions come to fruition.

Sources revealed that SoftBank is focusing on AI use cases with significant market potential, with data centres and industrial robotics being the two key themes they are currently pursuing in India. While specific details of the potential investments remain undisclosed, one source suggests that these could involve greenfield data centre businesses of a large corporation or a manufacturing unit leveraging automation and AI.

Renewed Interest in Indian Investments

SoftBank’s potential AI-related investments in India would mark the end of a two-year hiatus in the conglomerate’s deal-making activities in the country. Between late 2018 and mid-2022, SoftBank invested approximately $11 billion in Indian startups and has reported exits worth around $6.5 billion from those investments. The company’s move to explore AI infrastructure companies in India underscores a growing global sentiment of viewing India as an exporter of technology rather than just a large market for capture.

As SoftBank continues to evaluate potential deals in the Indian data centre and industrial robotics sectors, the technology landscape in the country is poised for significant growth and innovation driven by the increasing adoption of artificial intelligence.

Alignment with SoftBank’s Global AI and Chip Strategy

SoftBank’s interest in India’s AI infrastructure aligns with its global shift towards AI and chips, even as the mega investor accelerates its exit from Vision Fund holdings in areas such as e-commerce and fintech. In the fiscal year 2024, SoftBank committed around $9 billion in investments focused on high-growth industries, including logistics, robotics, and autonomous driving.

In October 2023, SoftBank CEO Masayoshi Son expressed his belief that artificial general intelligence (AGI), with a capacity ten times the sum total of human intelligence, will be developed within the next decade. He also introduced the concept of Artificial Super Intelligence, which he claimed would be 10,000 times more powerful than human intelligence and become a reality within the next 20 years.

The post SoftBank Eyes Indian Data Centre Investments to Bolster AI Portfolio appeared first on Analytics India Magazine.

Unleashing a new era of investment banking through the power of AI

AI is expected to revolutionize financial transactions, and its increasing power has made it a force in all industries, not just the finance sector. AI has transformed day-to-day investment banking activities, from automated trading to customer service automation. Read on for a complete overview of how AI is used in investment banking.

How AI uncovers new opportunities for investment banking

AI can help front-office teams find new investment banking opportunities.

Artificial intelligence’s primary function is data analysis. Human brains can only process a limited amount of information, so we are poor at predicting trends or adapting to changing consumer tastes. AI has a way to go until it can solve your entire pipeline problem, but it still increases the likelihood of positive outcomes.

Experts focus on several areas when using AI in investment banking to create new opportunities:

1. Fraud detection

AI is used to detect and stop fraud by monitoring transactions, detecting patterns and suspicious behaviors, and informing authorities. Fraud detection is one of the most established uses of AI in investment banking. AI and machine learning help banks detect scams, reduce risk, find system gaps, and make online banking more secure.

It helps banks identify real-time suspicious activity, such as money laundering and fraudulent transactions. The system flags high-risk transactions to be reviewed manually by experts. This allows for proactive risk management and compliance with regulatory standards.
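As an illustrative sketch of the pattern-detection idea (the amounts and threshold here are hypothetical, and production systems use far richer models and features), a transaction can be flagged for manual review when it deviates sharply from an account's recent history:

```python
from statistics import mean, stdev

# Hypothetical recent transaction amounts for one account.
history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0, 44.0, 58.0]

def is_suspicious(amount, past, threshold=3.0):
    """Flag an amount whose z-score against recent history exceeds the threshold."""
    mu, sigma = mean(past), stdev(past)
    return abs(amount - mu) > threshold * sigma

print(is_suspicious(50.0, history))    # within typical spending behaviour
print(is_suspicious(5000.0, history))  # far outside recent behaviour -> review
```

Flagged transactions would then go to the manual-review queue described above, rather than being blocked automatically.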

2. Automated trading & algorithmic trading

AI algorithms can analyze large amounts of data, identify patterns, and execute trades independently. Many investment banks use AI algorithms to manage their investment portfolios and execute trades. These algorithms continuously monitor the market and make real-time decisions to maximize investment outcomes.

3. News monitoring and sentiment analysis

Investment banks can use AI to analyze news articles, social media posts, and other information sources to gauge market sentiment and make informed decisions. Another global leader in investment banks uses AI algorithms to monitor social media sentiment and news in real-time, allowing its analysts and traders to stay up-to-date on market trends. The algorithms also add weight to the information and grade it according to its source.
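A toy sketch of the scoring idea, using a hand-made lexicon (real sentiment systems use trained models and weight information by source, as described above; the words and headlines here are hypothetical):

```python
# Tiny hand-made sentiment lexicon; real systems learn these weights from data.
POSITIVE = {"rally", "growth", "strong", "beat", "upgrade"}
NEGATIVE = {"lawsuit", "miss", "downgrade", "fraud", "weak"}

def sentiment_score(headline):
    """Return the count of positive words minus negative words in a headline."""
    words = {w.strip(".,!?").lower() for w in headline.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

print(sentiment_score("Strong earnings beat sparks rally"))  # positive score
print(sentiment_score("Regulator opens fraud lawsuit"))      # negative score
```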

4. Cyber threat detection

Artificial intelligence allows banks to monitor cyber-attacks continuously and respond to them quickly before they affect their workers, customers, or infrastructure. Supervised machine learning can now detect malware.

A machine-learning-powered application continuously learns to recognize malicious files as new samples and parameters appear. A cybersecurity AI detects abnormalities in data transmission patterns. Such systems are based on machine learning algorithms that monitor networks, detect malicious code, and prevent data breaches.

Banks can use AI to combat cyber threats. In one deployment, deep learning increased a bank’s ability to detect fraud by 50% and reduced false positives by 60%. The AI-powered fraud detection system also automated several other essential decisions. One AI system, “Black Forest,” examines financial transactions and monitors unusual events. Over time, the AI learns to categorize transactions accurately and flag only those that pose a real security risk.

5. Chatbots for customer service

AI-powered chatbots help banks serve their customers by answering questions quickly, providing personalized funding suggestions, and moving conversations forward. Businesses use chatbots and other AI-based tools to provide the answers their customers need.

AI can also improve Know Your Client (KYC), the process of verifying client identity; verification accuracy depends on recognizing a person’s eyes and face. Chatbots can significantly benefit the finance industry by simplifying customer service, reducing legal tasks, and providing clear instructions.

6. Reporting on regulatory matters

Regulations require that institutions covered by government regulations conduct stress tests to determine their ability to absorb losses in periods of financial strain while maintaining the ability to lend money and meet their obligations to creditors. AI-based models that simulate adverse market conditions can help teams meet stress test requirements. These advanced models combine synthetic data with accurate data, such as past events, current market conditions, and future risks, to create these simulations. Artificial intelligence can also create draft versions for technical documents, such as audit and environmental reports.
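One way such a stress-test simulation can be sketched (all figures here are hypothetical, and regulatory models are far more sophisticated) is a simple Monte Carlo estimate of tail losses under an adverse-shock scenario:

```python
import random

random.seed(0)  # fixed seed so the simulation is repeatable

def stress_loss_p99(portfolio_value, shock_mean, shock_vol, trials=10_000):
    """Monte Carlo estimate of the 99th-percentile loss under an adverse-shock scenario."""
    losses = sorted(
        max(0.0, -random.gauss(shock_mean, shock_vol)) * portfolio_value
        for _ in range(trials)
    )
    return losses[int(0.99 * trials)]

# Hypothetical inputs: a $100M book, a -5% mean shock, 10% shock volatility.
print(round(stress_loss_p99(100e6, -0.05, 0.10)))
```

The tail-loss estimate would then be compared against capital buffers to judge whether the institution can absorb losses while continuing to lend and meet obligations.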

Conclusion

AI integration has already brought about significant changes in investment banking, transforming how investment banking professionals work and interact with their clients.

Investment banks need to embrace AI’s opportunities and challenges as it develops. Investment banks are at the forefront of AI’s future by adopting ethical practices and ensuring compliance with regulatory requirements.

Top 10 Must-Watch OpenAI GPT-4o Demos

At the OpenAI Spring Update, OpenAI CTO Mira Murati unveiled GPT-4o, a new flagship model that enriches its suite with ‘omni’ capabilities across text, vision, and audio, promising iterative rollouts to enhance both developer and consumer products in the coming weeks.

With GPT-4o, OpenAI trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network. While introducing the model, OpenAI made several demonstrations to showcase its capabilities. Here, we have cherry-picked the top ones.

For customer service

This was a fun one! Take a look at 2 AI agents resolving a customer service claim with #OpenAI new #GPT4o.
Working with customers to build transformational solutions always gets me fired up. The potential solutions we can build with this new SOTA model has my head spinning! pic.twitter.com/86SNgNI6Tl

— Joe Beutler (@JoeBeutler) May 14, 2024

OpenAI’s GPT-4o is capable of engaging in natural and realistic voice conversations. This capability makes it an ideal solution for building customer service chatbots, where two AI agents can collaborate to resolve customer service claims.

Real Time Translation

Live audience request for GPT-4o realtime translation pic.twitter.com/VSj5phFKM6

— OpenAI (@OpenAI) May 13, 2024

During the spring update event, OpenAI’s CTO, Mira Murati, demonstrated the real-time translation capabilities of GPT-4o, successfully translating Italian to English and vice versa. This feature poses a significant threat to Google Translate and Duolingo, which offer similar services.

Interestingly, Duolingo stock fell 3.5%, wiping out ~$250M in market value, within minutes of OpenAI demoing the real-time translation capabilities of GPT-4o.

Human-Computer-Computer Interaction

Introducing GPT-4o, our new model which can reason across text, audio, and video in real time.
It's extremely versatile, fun to play with, and is a step towards a much more natural form of human-computer interaction (and even human-computer-computer interaction): pic.twitter.com/VLG7TJ1JQx

— Greg Brockman (@gdb) May 13, 2024

As Brockman puts it, GPT-4o can reason across text, audio, and video in real time, enabling a much more natural form of human-computer interaction (and even human-computer-computer interaction). In this demo, OpenAI President Greg Brockman moderates a conversation between two ChatGPTs.

AI Education and Tutor

This demo is insane.
A student shares their iPad screen with the new ChatGPT + GPT-4o, and the AI speaks with them and helps them learn in *realtime*.
Imagine giving this to every student in the world.
The future is so, so bright. pic.twitter.com/t14M4fDjwV

— Mckay Wrigley (@mckaywrigley) May 13, 2024

In another demo, presented by Khan Academy, a student shared their iPad screen with ChatGPT running GPT-4o. ChatGPT assisted the student step by step in solving a mathematical problem: rather than providing the entire solution at once, it guided the student towards it. Students can also share their notebooks using their mobile camera, and ChatGPT will understand the content.

Meeting AI with GPT-4o

One demo that's easy to miss, but I think is significant in what it shows is likely to be possible soon, is this demo — GPT-4o for meetings: https://t.co/UeT5285R9c

— Greg Brockman (@gdb) May 13, 2024

GPT-4o, through the desktop app, can join online meetings and moderate them, offering its own valuable input that can be crucial in decision-making. Moreover, it can transcribe and summarize meeting discussions in real time, ensuring that no important details are missed and providing a reliable reference for participants.

Assistant for Visually Impaired Individuals

GPT-4o as tested by @BeMyEyes: pic.twitter.com/WeAoVmxUFH

— Greg Brockman (@gdb) May 14, 2024

Be My Eyes, a mobile app designed for visually impaired individuals, tested GPT-4o’s vision capabilities by assisting a visually impaired person in navigating the city. ChatGPT was able to accurately identify the location and minute details of the surroundings.

Unlike human volunteers who may not be available at all times, GPT-4o can offer continuous support, ensuring that visually impaired users have access to assistance whenever they need it.

Interview Prep

Interview prep with GPT-4o pic.twitter.com/st3LjUmywa

— OpenAI (@OpenAI) May 13, 2024

In this demonstration, ChatGPT helps a candidate prepare for an interview. Using the front camera, ChatGPT can tell whether the candidate is dressed appropriately. Moreover, it can also help with preparations by conducting mock interviews and providing feedback on answers, highlighting strengths and areas for improvement to enhance performance.

Jam with ChatGPT

Lullabies and whispers with GPT-4o pic.twitter.com/5T7ob0ItuM

— OpenAI (@OpenAI) May 13, 2024

GPT-4o has a surprise talent – it can sing! Users can request personalised songs for special occasions like birthdays, anniversaries, or just for fun. The chatbot can generate a variety of tunes and melodies based on emotions or specific details provided by the user, from soft whispers to energetic anthems.

AI Coding Assistant

Live demo of coding assistance and desktop app pic.twitter.com/GlSPDLJYsZ

— OpenAI (@OpenAI) May 13, 2024

OpenAI has introduced the ChatGPT app for desktop. The app allows voice conversations, screenshot discussions, and instant access to ChatGPT, acting as your friendly, go-to colleague in times of crisis. It’s like an AI assistant who is always there to help you out with any problem you come across, from writing code to brainstorming ideas.

Rock, Paper, Scissors with GPT-4o

6. Rock, Paper, Scissors with GPT-4o pic.twitter.com/oMuMRRbrKO

— Angry Tom (@AngryTomtweets) May 13, 2024

You can enjoy playing fun games like Rock, Paper, Scissors with ChatGPT as the perfect referee. It can also hype you up and cheer for you during the game.

The post Top 10 Must Watch OpenAI GPT-4o Demos appeared first on Analytics India Magazine.

AI Likely to Create More Jobs Than There are People

Contrary to popular opinion, Groq CEO Jonathan Ross believes AI will create more jobs than we can handle. Ross said that the rapid technological advancement in AI could be another case of Jevons Paradox working its magic.

“We keep thinking of each technology as displacing work. One of the things that’s probably going to happen is we will create more jobs for people than we have people,” Ross said, predicting that suddenly there may be a lack of people to do things.

Citing the surge in the use of visual graphics in news articles, he said producing them has become easier than ever, as “most people spend more hours in generating graphics”. This hints at AI being used more, not less, with its uses expanding far beyond what was originally intended.

He said this pointing at the Jevons Paradox, first described in 1865 by English economist William Stanley Jevons in his book The Coal Question.

As steam engines became more efficient, people did not use less coal; instead, they used more of it. This increase in coal utilisation occurred because the more efficient steam engines lowered operational costs, enabling a widespread and intensive use of the engines.

But is it the same?

Ross believes that something similar is happening now. AI advancements are not only making it easier to perform tasks but also reducing costs, resulting in more people using them across sectors and creating more jobs in the process.

“What will probably happen is with most of the things that generative AI makes easy, you will actually see an increase in human activity on that. There’s always going to be someone who’s going to be more entrepreneurial and figure out a way to monetise and get a whole bunch of people working on it,” Ross said.

After all, the World Economic Forum predicted that AI would not only replace 85 million jobs by 2025, but also create 97 million jobs at the same time.

In another report, the WEF listed the types of jobs that will be made obsolete, which included easily automatable jobs like bank tellers, data entry clerks and secretaries. Meanwhile, jobs for AI and machine learning specialists were some of the fastest-growing.

Depends on What You Mean by Automatable

While the study stated that jobs like clerks and secretaries were foremost at the risk of becoming obsolete, others held a different opinion.

In a recent episode of the Ben and Marc Show podcast, Marc Andreessen and Ben Horowitz discussed how AI was more likely to take over the middle management at organisations. Thanks to the ease of training employees as well as a lack of interpersonal issues, the two reasoned that AI could take over managerial jobs rather than employee jobs.

Likewise, OpenAI CEO Sam Altman conceded that the shift towards AI usage in the workplace is occurring differently than expected. “In many cases, this is something that will change the way people do their jobs in the same way that mobile phones, the internet, and computers did,” he said.

Further, he said that the jobs future generations do will be different, but at the moment the aim is to figure out how to adjust to the speed at which advancement is happening.

What is the solution?

“The world just needs a lot more code than we have people to write right now,” declared Altman, saying, “You hear a coder say I’m like 2-3x times more productive,” rather than the other way around.

Similarly, Ross stated that actual job generation will come from more people working in AI and pioneering new developments. As they do so and create their own companies, more people will be hired, resulting in more jobs.

This is already being seen: according to Stanford’s AI Index Report 2024, the number of newly funded AI startups increased by 40.6% from 2022, with as many as 1,812 startups newly funded in 2023.

With this increase in startups, and with developers aiming to upskill themselves, lower-level workers are likely to follow suit. This is backed by the growing number of organisations, as noted in the Stanford report, making use of AI, regardless of whether the company itself is tech-inclined.

While it might not be realistic to expect most people to reskill themselves in accordance with the changing job market, becoming skilled with AI may not be as hard as one would expect.

Several universities have begun refining their AI and data science programmes, to the point that they have ranked in this year’s QS World Rankings.

Similarly, both universities and companies have begun offering their own free courses online, most of which go into the foundations of AI, machine learning and data science. But while this could help, it doesn’t fully resolve the problem of potential job loss or how we could start skilling for the influx of potential jobs created because of AI.

“The entire point of going into a different age and why you would call it a different technological age, is it breaks all of our intuitions,” said Ross.

The post AI Likely to Create More Jobs Than There are People appeared first on Analytics India Magazine.

India’s AI Spending to Triple to $5 Bn by 2027

According to a recent report by Intel-IDC, unveiled at the AI for India Conference in Delhi today, India’s spending on AI may reach $5.1 billion by 2027. This surge is largely attributed to AI infrastructure provisioning. This includes spending on hardware such as servers and chips (CPUs, GPUs, and accelerators), as well as software components like frameworks and libraries.

In an exclusive interaction with AIM, Santhosh Viswanathan, Vice President and Managing Director, India Region, Intel, said, “With an unmatched talent pool, frugal innovation, and data at scale, India stands poised to lead the global AI revolution.”

Besides infrastructure, the current focus of AI adoption lies in enhanced customer experience (CX) and improved employee productivity, helping employees focus on more value-added work by removing mundane, time-consuming tasks.

“India’s commitment to AI, underscored by its proactive approach, drives transformative growth. This positions India as a frontrunner in shaping the future of this technology. Intel recognises this extraordinary opportunity, elevating India as a distinct geography for our business operations. We’re proud to be part of India’s journey towards AI excellence and building an Amazing India”, he added.

Meanwhile, Jaipur leads in AI job growth among Tier 2 cities. Key challenges to adoption include infrastructure investments, unclear business outcomes, compliance, skilling, and cost issues.

Notably, the report suggests that in 2023, the leading sectors driving AI expenditure in India were BFSI (Banking, Financial Services, and Insurance) and manufacturing.

The manufacturing sector, spurred by the ‘Make in India’ initiative, is anticipated to play a pivotal role in driving the country’s economic expansion, particularly within areas such as electronics and consumer goods. India’s ambition to establish itself as a global AI-driven manufacturing hub is a key driver behind the significant AI spending in this sector.

However, challenges persist for sectors like government, healthcare, and telecom in quantifying the return on investment (ROI) concerning AI implementation. The complexities arise from difficulties in quantifying intangible benefits, such as enhanced customer satisfaction and augmented decision-making capabilities, in monetary terms.

Further, the government has allocated approximately $30.7 million in the fiscal year 2024-25 to establish three centres of excellence in AI. These centres will concentrate on agriculture, health, and sustainable cities, aiming to tackle specific challenges within these sectors and pioneer groundbreaking AI solutions.

Certain government initiatives such as the National Strategy for AI, Making AI Work for India, and the INDIAai portal are pivotal in driving research and development (R&D) efforts while fostering the adoption of AI across key sectors including healthcare, education, agriculture, and manufacturing.

Moreover, a report jointly released by the IT industry body Nasscom and consulting firm BCG in early February 2024 sheds further light on India’s growing AI market. Projections indicate that by 2027, the AI market in India could potentially reach a remarkable $17 billion, exhibiting an annualized growth rate of 25-35% between 2024 and 2027.

Highlighting India’s prowess in AI, the report underscores the nation’s leading position in skills penetration, with over 420,000 employees currently working in AI job functions.

The post India’s AI Spending to Triple to $5 Bn by 2027 appeared first on Analytics India Magazine.

5 Free University Courses to Learn Machine Learning


If you’re interested in a data career, it's important to become familiar with machine learning. With data analysis, you can analyze relevant historical data to answer business questions. But with machine learning, you can take this a step further by building models that can predict future trends based on the available data.
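
To make the distinction concrete, here is the simplest possible “predict future trends” model: a least-squares trend line in pure Python. The monthly sales figures are hypothetical, and real projects would reach for a library like scikit-learn, but the idea of fitting a model to history and extrapolating is the same.

```python
def fit_trend(values):
    """Fit a least-squares line y = a + b*x to historical values
    observed at x = 0, 1, 2, ..."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    # Slope: covariance of x and y divided by variance of x
    b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values)) / \
        sum((x - mean_x) ** 2 for x in xs)
    a = mean_y - b * mean_x
    return a, b


# Hypothetical monthly sales history
sales = [100, 104, 109, 113, 118]
a, b = fit_trend(sales)
next_month = a + b * len(sales)  # extrapolate one step ahead
print(round(next_month, 1))  # 122.3
```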

To help you get started with machine learning, we’ve compiled a list of free courses from universities like MIT, Harvard, Stanford, and the University of Michigan. I recommend sifting through the contents of the courses to get a feel for what they cover. Then, based on what you’re interested in learning, you can choose to work through one or more of them.

Let’s get started!

1. Introduction to Machine Learning – MIT

The Introduction to Machine Learning course from MIT covers a range of ML topics in considerable depth. You can access the course contents, including the exercises and practice labs, for free on the MIT Open Learning Library.

From the basics of machine learning to ConvNets and recommender systems, here’s a list of topics that this course covers:

  • Linear classifiers
  • Perceptrons
  • Margin maximization
  • Regression
  • Neural networks
  • Convolutional neural networks
  • State machines and Markov Decision Processes
  • Reinforcement learning
  • Recommender systems
  • Decision trees and nearest neighbors

Link: Introduction to Machine Learning
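
As a taste of what the course covers, the perceptron (one of the topics listed above) fits in a few lines of plain Python. This is a toy sketch with a made-up, linearly separable dataset, not material from the course itself.

```python
def train_perceptron(data, epochs=10):
    """Train a perceptron on (features, label) pairs, labels in {-1, +1}."""
    w = [0.0] * len(data[0][0])
    bias = 0.0
    for _ in range(epochs):
        for x, label in data:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + bias
            if label * activation <= 0:  # misclassified: nudge the boundary
                w = [wi + label * xi for wi, xi in zip(w, x)]
                bias += label
    return w, bias


def predict(w, bias, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0 else -1


# Toy linearly separable set: points above vs below the line y = x
data = [([0.0, 1.0], 1), ([1.0, 2.0], 1), ([1.0, 0.0], -1), ([2.0, 1.0], -1)]
w, bias = train_perceptron(data)
print(all(predict(w, bias, x) == label for x, label in data))  # True
```

The update rule only fires on mistakes, which is why the perceptron converges on linearly separable data; the course builds from here to margin maximization.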

2. Data Science: Machine Learning – Harvard

Data Science: Machine Learning is another course where you’ll get to learn machine learning fundamentals by working on practical applications such as movie recommendation systems.

The course goes over the following topics:

  • Basics of machine learning
  • Cross-validation and overfitting
  • Machine learning algorithms
  • Recommendation systems
  • Regularization

Link: Data Science: Machine Learning
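
Cross-validation, one of the topics above, is easy to sketch: split the data into k folds, hold out each fold once for validation, and train on the rest. Below is a minimal index-only sketch in pure Python; libraries like scikit-learn provide production-grade versions.

```python
def k_fold_indices(n_samples, k):
    """Yield (train_indices, validation_indices) pairs for k-fold CV.

    Each sample lands in exactly one validation fold, so every data
    point is validated once and used for training k-1 times.
    """
    indices = list(range(n_samples))
    # Distribute any remainder across the first folds
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, val
        start += size


for train, val in k_fold_indices(10, 5):
    print(val)
```

Averaging a model’s score over the k validation folds gives a far more reliable estimate of generalization than a single train/test split, which is exactly how the course frames overfitting.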

3. Applied Machine Learning in Python – University of Michigan

Applied Machine Learning in Python is offered by the University of Michigan on Coursera. You can sign up on Coursera and access the course contents for free on the audit track.

This is a comprehensive course that focuses on popular machine learning algorithms along with their scikit-learn implementation. You’ll work on simple programming exercises and projects using scikit-learn. Here’s the list of topics this course covers:

  • Introduction to machine learning and scikit-learn
  • Linear regression
  • Linear classifiers
  • Decision trees
  • Model evaluation and selection
  • Naive Bayes, Random forest, Gradient boosting
  • Neural networks
  • Unsupervised learning

This course is part of the Applied Data Science with Python specialization offered by the University of Michigan on Coursera.

Link: Applied Machine Learning in Python
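
The course teaches model evaluation through scikit-learn, but the underlying idea is simple enough to show in dependency-free Python. Here is a toy sketch of binary-classifier metrics with made-up labels and predictions.

```python
def confusion_counts(y_true, y_pred, positive=1):
    """Count true/false positives and negatives for a binary classifier."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn


def precision_recall(y_true, y_pred):
    tp, _, fp, fn = confusion_counts(y_true, y_pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


# Hypothetical ground-truth labels and model predictions
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(precision_recall(y_true, y_pred))  # (0.75, 0.75)
```

Precision asks “of everything I flagged, how much was right?” while recall asks “of everything I should have flagged, how much did I catch?”; the course covers when to optimize for each.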

4. Machine Learning – Stanford

As a data scientist, you should also be comfortable building predictive models. Learning how machine learning algorithms work and being able to implement them in Python can, therefore, be very helpful.

CS229: Machine Learning at Stanford University is one of the most highly recommended ML courses. It lets you explore the different learning paradigms: supervised, unsupervised, and reinforcement learning. You’ll also learn about techniques like regularization to prevent overfitting and build models that generalize well.

Here’s an overview of the topics covered:

  • Supervised learning
  • Unsupervised learning
  • Deep learning
  • Generalization and regularization
  • Reinforcement learning and control

Link: Machine Learning
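
Regularization, mentioned above, is easiest to see in the simplest case: one feature, no intercept, squared-error loss with an L2 penalty. This toy sketch uses made-up numbers; CS229 derives the general matrix form.

```python
def ridge_weight(xs, ys, lam):
    """Closed-form ridge regression for one feature, no intercept.

    Minimises sum((y - w*x)^2) + lam * w^2, which gives
    w = sum(x*y) / (sum(x^2) + lam).
    """
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)


xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # exact relationship y = 2x
print(ridge_weight(xs, ys, 0.0))   # 2.0  (no penalty: exact fit)
print(ridge_weight(xs, ys, 14.0))  # 1.0  (heavy penalty shrinks the weight)
```

The penalty `lam` trades a little bias for less variance: larger values pull the weight towards zero, which is what keeps high-dimensional models from overfitting.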

5. Statistical Learning with Python – Stanford

The Statistical Learning with Python course covers all the contents of the Introduction to Statistical Learning (ISL) with Python book. Working through the course with the book as a companion, you’ll learn essential tools for data science and statistical modeling.

Here is a list of the key areas that this course covers:

  • Linear regression
  • Classification
  • Resampling
  • Linear model selection
  • Tree-based methods
  • Unsupervised learning
  • Deep learning

Link: Statistical Learning with Python

Wrapping Up

I hope you found this list of free machine learning courses from top universities useful. Whether you want to work as a machine learning engineer or explore machine learning research, these courses will help you build the foundations.

Here are a couple of related resources you might find helpful:

  • 5 Free Courses to Master Machine Learning
  • Introduction to Statistical Learning, Python Edition: Free Book

Happy learning!

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.

More On This Topic

  • 5 Free University Courses to Learn Python
  • 5 Free University Courses to Learn Data Science
  • 5 Free University Courses to Learn Databases and SQL
  • 5 Free University Courses to Learn Computer Science
  • 5 Free Stanford University Courses to Learn Data Science
  • Learn Probability in Computer Science with Stanford University for FREE