6 Popular Non-English Programming Languages

English is the lingua franca of computer programming. There are roughly 8,945 programming languages according to the Online Historical Encyclopaedia of Programming Languages (HOPL), and a majority of them are in English, since most of the early advancements in computing came from the USA, Britain, Canada, and other English-speaking countries.

Some of the newer languages that came from other countries, like Python from the Netherlands or Lua from Brazil, also use English keywords because English-based syntax is so widespread across the world.

Some options do exist, however. Localised programming languages like Citrine allow users to program in their native language. There is also APL, an array-based programming language that uses a range of graphic symbols in place of English words. Quorum and Bootstrap are designed for individuals who are blind or have sensorimotor disabilities. Though only a few languages are localised, they remove the barriers of a foreign language or a disability for young coders and encourage localised documentation and software development.

Here are some popular programming languages rooted in languages other than English:

Zhpy

Zhpy, also known as “Chinese Python,” is a programming language that allows developers to write Python code using Chinese keywords and syntax. It isn’t a separate language but a variation of Python: it uses Python as its backend, which means that Zhpy code can be executed by a Python interpreter.

It allows developers to leverage the existing Python ecosystem and libraries while writing code in Chinese. It is commonly used in mainland China and Singapore. Traditional Chinese characters, used in Hong Kong and Taiwan, are not the focus of Zhpy. It is an open-source project hosted on GitHub, which means that anyone can contribute to its development and improvement.
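The translation step can be illustrated with a minimal Python sketch. The keyword mapping and `translate` helper below are hypothetical, not Zhpy’s actual implementation; they only show the general idea of swapping Chinese keywords for Python ones before handing the code to the interpreter.

```python
# Minimal sketch of keyword translation in the style of Zhpy
# (hypothetical mapping; Zhpy's real translation table is far larger).
KEYWORD_MAP = {
    "定义": "def",     # "define"
    "返回": "return",  # "return"
    "如果": "if",      # "if"
    "否则": "else",    # "else"
}

def translate(source: str) -> str:
    """Replace Chinese keywords with their Python equivalents."""
    for zh, en in KEYWORD_MAP.items():
        source = source.replace(zh, en)
    return source

# Python 3 allows Unicode identifiers, so only keywords need translating.
chinese_source = "定义 双倍(x):\n    返回 x * 2"
namespace = {}
exec(translate(chinese_source), namespace)
print(namespace["双倍"](21))  # prints 42
```

Because the output is ordinary Python source, the full standard library and third-party ecosystem remain available to code written this way.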

Ruby

In 1993, Yukihiro Matsumoto created Ruby in Japan, wanting an object-oriented programming language that could also be used for scripting. When Ruby was originally published, its documentation was much more comprehensive in Japanese than in English. The Japanese Ruby community actively contributes to its development and evolution, organising events, conferences, and meetings such as RubyKaigi, one of the largest Ruby conferences, held annually in Japan.

The Japanese Ruby community works on translating documentation, error messages, and programming resources into Japanese, ensuring that Japanese developers can work with Ruby more comfortably. The community has contributed numerous Ruby ‘gems’ (libraries) tailored specifically to Japanese development needs. These gems address various aspects, such as text processing, date and time handling, internationalisation, and more.

Haxe

Haxe is a high-level, cross-platform programming language known for its versatility. It supports multiple target platforms, including JavaScript, Flash, C++, and more. While Haxe itself is primarily based on English syntax and documentation, it has gained popularity and adoption in various non-English-speaking countries for several reasons:

Haxe documentation and learning resources have been translated into French, German, Spanish, Chinese, and Russian. These translations, while not comprehensive, provide localised content to help developers from non-English-speaking regions better understand and utilise Haxe. Additionally, user groups and events organised in various countries often provide sessions and presentations in local languages, further supporting developers in those regions.

Qalb

Qalb, the Arabic programming language developed by Ramsey Nasser, aims to provide a user-friendly and accessible coding experience for Arabic speakers. Its syntax and grammar are modelled on Lisp and Scheme. Qalb eliminates the language barrier that many Arabic-speaking individuals face when programming in English, letting people learn and practice programming concepts in their native language, which can make it easier for beginners to grasp the fundamentals of coding.

It offers an easy-to-learn approach to programming, enabling users to implement complex programs without the need to navigate through jargon or complicated syntax found in languages like C++.
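The Lisp-style prefix notation that Qalb follows can be sketched with a tiny expression evaluator. This is illustrative only: Qalb’s keywords are Arabic and its feature set is far richer than this toy.

```python
# Tiny evaluator for Lisp-style prefix expressions, the model Qalb follows.
# Expressions are nested lists: ["+", 1, ["*", 2, 3]] means (+ 1 (* 2 3)).
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    """Recursively evaluate a prefix expression given as nested lists."""
    if isinstance(expr, (int, float)):
        return expr  # atoms evaluate to themselves
    op, *args = expr
    return OPS[op](*(evaluate(a) for a in args))

print(evaluate(["+", 1, ["*", 2, 3]]))  # prints 7
```

In Qalb, the operator names and keywords are Arabic words rather than symbols, but the underlying evaluation model is this same prefix form inherited from Lisp and Scheme.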

1C: Enterprise

Founded by Boris Nuraliev in Moscow, Russia, in 1991, 1C Company is a software developer, distributor, and publisher. In 1992, the company released “1C:Accounting,” a bookkeeping program that gained immense popularity due to its simplicity and extensive customisation options.

As a result, 1C:Accounting became the most widely used accounting program in Russia and the former USSR states. Headquartered in Moscow, 1C Company is involved in the development, manufacturing, licensing, support, and sale of computer software, related services, and video games. 1C:Enterprise offers a low-code approach with ready-to-use infrastructure and visual editing tools. It follows a domain-driven design methodology and incorporates a high-level object-oriented language.

It holds a significant share of the Russian enterprise software market and has expanded its presence to countries like the US, Germany, Romania, Poland, Italy, Spain, and Vietnam. The platform supports various database management systems, includes pre-configured building blocks, and is available in multiple languages such as Russian, English, and Chinese.

Citrine

Citrine is a programming language that places a strong emphasis on localisation as its core feature. It is designed to be translatable into every written human language, allowing developers to write code in their preferred language. For example, the West Frisian version of Citrine is known as Citrine/FY.

One of the key aspects of Citrine’s localisation is the translation of keywords, numbers, and punctuation into the target language. This means that developers can write code using keywords that are familiar and meaningful in their own language. Additionally, numbers and punctuation marks are also localised to match the conventions of the target language.
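The idea of localising numbers can be sketched in a few lines of Python. This is an illustration of the concept, not Citrine’s actual implementation: many European locales, including Frisian, write 3,14 where English writes 3.14, so a localised language must convert between the two conventions.

```python
# Sketch of number localisation in the spirit of Citrine (illustrative only).
# Converts between the English decimal point and a locale's decimal comma.

def localise_number(text: str, decimal_sep: str = ",") -> str:
    """Render an English-style number string using the locale's separator."""
    return text.replace(".", decimal_sep)

def delocalise_number(text: str, decimal_sep: str = ",") -> float:
    """Parse a locale-formatted number string back into a float."""
    return float(text.replace(decimal_sep, "."))

print(localise_number("3.14"))    # prints 3,14
print(delocalise_number("3,14"))  # prints 3.14
```

Citrine applies the same principle across the whole language surface: keywords, punctuation, and number formats all follow the conventions of the target locale.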

Citrine’s commitment to localisation is extensive, as it aims to support all natural human languages, not just well-known ones. By providing extensive language support, Citrine allows developers from diverse linguistic backgrounds to write code in their native language, making programming more accessible and inclusive. This approach acknowledges the importance of language and culture in programming and seeks to bridge the language barrier for developers around the world.

The post 6 Popular Non-English Programming Languages appeared first on Analytics India Magazine.

Google’s AR Software Leader Quits Over Company’s Unstable Commitment 

Former head of operating systems on Google‘s augmented reality team, Mark Lucovsky, has departed from the company. In a recent tweet, Lucovsky cited “changes in AR leadership” and concerns about Google’s commitment and vision in the field as factors influencing his decision to leave.

Time to come home? 🙂

— Eric Horvitz (@erichorvitz) July 11, 2023

Lucovsky joined Google in 2021 to take charge of the OS team focused on augmented reality (AR) technology. Before his tenure at Facebook and Google, Lucovsky held positions at Microsoft and VMware. He gained recognition for his involvement in the design and development of the Windows NT operating system, which served as the foundation for all subsequent Windows releases after Windows XP.

“Moving forward, I am eager to explore opportunities that allow me to further advance Augmented Reality technology and its intersection with generative AI,” Lucovsky wrote. “I approach the next chapter with enthusiasm and anticipation for the exciting possibilities that lie ahead,” he added.

Lucovsky hinted at rejoining Microsoft by replying “May be” to the tweet from Microsoft’s Chief Scientific Officer, Eric Horvitz: “Time to come home? 🙂”.

His departure comes shortly after Google reportedly scrapped its latest augmented reality (AR) headset, ‘Project Iris’, according to a report by Insider. The publication reported that the project was shelved earlier this year following layoffs, reshuffles, and the departure of Google’s AR/VR chief Clay Bavor. However, Google is yet to confirm or deny whether Project Iris has been shelved.


Data Science Hiring Process at Mastercard

Since its founding in 1966, financial services giant Mastercard has aimed to connect and power an inclusive digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Headquartered in Purchase, NY, it offers an extensive array of cutting-edge financial services. Mastercard was formed by a coalition of financial institutions and regional bankcard associations as a countermeasure to Bank of America’s BankAmericard, which subsequently evolved into Visa, Mastercard’s primary rival to this day.

“We are dedicated to enhancing the security of the payments industry by using data science,” Rajesh Chopra, Senior Vice President, Data & Services, South Asia, Mastercard, told AIM in an exclusive interview.

Mastercard’s expertise in this area provides great value for its stakeholders. The global payments company is also introducing an innovative AI-based solution that enhances banks’ ability to detect potential fraudulent money transfers by their customers. Several prominent UK banks, including Lloyds Banking Group Plc, Natwest Group Plc, and Bank of Scotland Plc, have partnered with Mastercard to utilise the Consumer Fraud Risk system.

Inside Mastercard’s AI & Analytics Play

Mastercard leverages AI to prevent cyberattacks and fraudulent activities, saving more than $30 billion in the last two years alone. Additionally, Mastercard employs Decision Intelligence to address real-time business requirements while avoiding disruptions.

By analysing multiple data points and implementing advanced modelling techniques, a transaction decision score is generated, offering valuable insights to card issuers. By leveraging this score, issuers can optimise authorisation decisions, approving legitimate transactions while maintaining robust security measures.

Additionally, Mastercard employs AI to support banking partners in effectively managing credit risk. This ensures that customers are provided with appropriate credit amounts, enhancing the overall customer experience.

Mastercard also banks on data analysis expertise to empower decision-making, exemplified by the introduction of Recovery Insights during the COVID-19 pandemic. This comprehensive solution aids businesses and governments in managing health, safety, and economic risks through analytics, experimentation, consulting, and unique data-driven insights.

The company also prioritises technological advancements, such as implementing robotic process automation to streamline operations and enhance employee experiences. Additionally, Mastercard’s AI-powered global learning platform, Unlocked, provides employees with access to relevant mentorship opportunities and global projects, fostering continuous professional growth.

“At Mastercard, upholding data protection and privacy as fundamental human rights is of paramount importance so we employ a dedicated team of data scientists and AI technologists to ensure consistent adherence to responsible AI development and application,” Chopra emphasised.

With a strong governance process and adherence to best practices, they strive to minimise biases in AI models and data, promoting fair and equitable outcomes.

Hiring Process

Mastercard employs a fair and flexible system to hire talent from various regions. Candidates go through screening based on required skills, followed by technical assessments and interviews to evaluate their behaviour, expertise, and domain knowledge.

Currently, Mastercard is looking to hire skilled analysts, consultants, senior analysts, assistant managing consultants, and leadership positions such as managing consultants and senior managing consultants in advanced analytics. The desired expertise profiles for these roles include credit risk, fraud analytics, and other subject matter expert (SME) domains.

For consultant roles in analytics, a minimum of two years of experience is required. Preferred domain knowledge includes finance and retail analytics, payments, fintech, marketing, and commercialisation of scalable platforms. Senior analytical roles, on the other hand, consider experience ranging between five and seven years.

The key skills Mastercard looks for are passion, analytical excellence, project management, good communication, teamwork, integrity and bringing a diverse perspective.

On the technical side, when considering potential candidates, proficiency in various tools, applications, and frameworks is important.

These include platforms and database environments such as Hadoop, Oracle, and MySQL Server. Programming skills in Python, Pyspark, R, Rshiny, Spark, Impala, Hive, and SQL are also desired. Knowledge of BI tools like Tableau, Power BI, Alteryx, MSBI stack, Angoss, Think cell, Bitbucket, Adobe Analytics, and Toad is beneficial. Additionally, familiarity with data science techniques like custom analytics using AI/ML, deep learning, statistical modelling, model development, market research, H2O, and Test & Learn is valuable.

In terms of education and certifications, an advanced degree in Economics, Statistics, Mathematics, Finance, or Engineering with a focus on business applications is preferred. Certifications in the mentioned tools and skills are also beneficial. Having knowledge of business analytics, as well as certifications such as PMP and Scrum Master, are considered advantageous for these roles.

“Alongside technical proficiency, we highly value qualities like empathy, a desire to learn, willingness to collaborate, taking responsibility, adaptability, and a commitment to making a positive impact, not just achieving personal success,” said Chopra.

Work Culture

Mastercard is certified as a “Great Place to Work” and considers its people its “biggest assets”.

With a firm belief in fostering a culture of “decency,” Mastercard recognises that a strong workplace culture not only supports employee growth but also positions itself as a positive contributor to the community.

Three key areas contribute to cultivating this culture. Firstly, Mastercard places great emphasis on work flexibility, ensuring inclusivity and well-being among its workforce. Flexible time policies, coupled with a supportive culture, values, diversity, inclusion, career opportunities, and rewards, are all factors contributing to this recognition.

Secondly, employee health and well-being are important for Mastercard. Through its “Live Well” global digital program, Mastercard offers online sessions on healthy eating, exercise, meditation, and expert advice, supporting employees in maintaining their mental and physical well-being. Additional support is provided through benefits like childcare, bereavement leave, flexible scheduling, counseling, and fitness facilities.

Lastly, Mastercard invests significantly in upskilling its workforce, with a focus on project-based learning and mentoring.

“We are driven by a culture of Diversity, Equity, and Inclusion (DEI) that arises from our wider objective of empowering people, preserving the planet, and fostering prosperity. We aim to cultivate a culture that celebrates diversity of thought, background, experiences, and abilities,” Chopra concluded.

So if you are looking for a change in job role, tap on this link.


Meta Now Knows What You Think

Meta now knows what’s on your mind

Does Meta really care about user privacy? With all its platforms eavesdropping on our conversations, we’d say not so much. In 2019, Facebook even admitted to listening in and monitoring private conversations with the help of transcribing services that were available to the users on Messenger.

Moreover, if we take the case of Meta and Cambridge Analytica, Meta enabled election manipulation by letting users’ personal data be harvested and passed to political consultants, who ultimately used it for political advertising. It is very hard to trust Meta with any information now.

With the recent release of its new app called Threads, an alternative to Twitter, Meta has taken these concerns to another level. Instead of just messages or audio, the company is tapping into our thoughts! Twitter, essentially, is about posting your thoughts and following like-minded people. This in turn gives away a lot about a user’s personality, beliefs, and opinions on an array of things, such as products, people, and politics. Now, the same is the case with Threads. But given Meta’s track record, it is even harder to trust the company than Twitter.

Stop. Don't even load Threads.
Don't fall for this data collecting, AI scraping, privacy invading crap again.
Stop letting them harvest all your data and track your every move.
How many times has Facebook/Meta gotten caught collecting your data and then apologizing and then… pic.twitter.com/5LbSkcdiMl

— Grummz (@Grummz) July 6, 2023

Data is all you need

The implications of this invasive technology are far-reaching. While some users may appreciate the personalised experience and convenience offered by Meta Threads, others are deeply troubled by the idea that a company can delve into the inner workings of their minds. Concerns about privacy and the potential for manipulation abound, as Meta gains unprecedented insight into individuals’ thoughts, emotions, and even political leanings.

As Meta refines its technology and expands its reach, the potential applications become even more intriguing. The data gathered from Meta Threads could serve as a goldmine for developing advanced chatbots based on large language models (LLMs), like the second version of LLaMA. According to Zuckerberg on a Lex Fridman podcast, the company is already planning to leverage data from all its services in the next iteration of its technology.

This puts into perspective Meta’s plan to milk data through Threads. As soon as users jump onto the platform, Meta will be smartly ‘reading’ their thoughts. If we consider the metaverse, after listening to our conversations and reading our thoughts, the next big thing Meta could do is track what we see. In the end, the same data would be used by advertisers to curate “personalised” ads for every user, if not for political manipulation.

Mark Zuckerberg says he’s not thinking about monetization of users on his Twitter clone, Threads, but that’s a lie. His business model is selling our data to advertisers. Threads has near zero privacy. It knows our location, "Health & Fitness," "Financial Info," "Sensitive Info” pic.twitter.com/BusczYqaIG

— Michael Shellenberger (@shellenberger) July 6, 2023

This is one of the many reasons Twitter has always been considered as a data gold mine. One might think that Twitter is just a text-based social media platform, but one forgets that it hosts the discourses between people about every topic in the world. This is what Meta picked up from Twitter and built Threads – now telling brands what you think!

Furthermore, Meta’s new Generative AI Sandbox tool is also curated for giving advertisers the ability to track user behaviour on its platforms to personalise ads and target specific audiences.

Threads is just another advertising platform

It is clear that ads have been one of the biggest sources of revenue for Meta, and surprisingly, nothing changes with Threads. The privacy policy of the newly launched app has been a concern for people since launch.

Even though Threads does not host ads as of now, Zuckerberg’s approach to developing the app seems to hint at giving advertisers the ability to showcase customised ads harnessing Meta’s improved algorithms. Threads has been touted by Zuckerberg himself as one of the best places for companies to post their content, which would only drive more advertisers to the platform.

Meta has been one of the biggest proponents of self-supervised learning, where using minimal data, it is able to teach machines human behaviour. This, in turn, has helped the company develop sophisticated algorithms capable of analysing users’ thoughts and predicting their preferences with astonishing accuracy. Thus, Meta would keep the data for itself to build its own generative AI tools, while giving the services for advertisers.

This newfound ability raises serious concerns about user privacy and the potential invasion of their innermost thoughts. Users unwittingly provide Meta with a direct window into their minds. Every post, like, and comment on the app is carefully scrutinised and analysed by Meta’s advanced machine learning systems.

So even though Meta is not directly reading our minds, Threads is giving it a window into our minds. Meta has to prove its dedication towards users’ privacy now more than ever if it wants users to continue using Threads and not harvest their data.


6 Innovative AI Models for Weather Forecasting

Imagine a future where every weather catastrophe that rocks our world today can be averted. What if major weather changes and natural calamities could be predicted beforehand, enabling us to dodge the damage and destruction in their wake? While 100% accurate predictive modelling for weather doesn’t exist today, companies are increasingly working towards models that might reach that level of accuracy.

With traditional forecasting methods having their limitations, predictive models that adopt AI for weather forecasting offer better interpretability and scalability. Here is a list of companies already working in this space.

Tomorrow.io

Previously known as ClimaCell, Tomorrow.io uses AI and machine learning techniques for weather prediction. Their AI model ‘Gale’ analyses huge amounts of data, including radar, satellite imagery, atmospheric data, and other non-traditional sources to generate weather forecasts.

The company provides highly localised and precise weather information for various industries such as transportation, logistics, energy, and agriculture which helps these businesses to make informed decisions and optimise their operations based on reliable weather forecasts. The company has even launched the first predictive weather forecasting plugin in ChatGPT.

Atmo.io

Atmo.io builds hardware-software systems that solve weather predictions for cities, nations, defense organisations and businesses. Atmo combines Deep Neural Networks (DNN) and Numerical Weather Prediction (NWP) methods to create an innovative framework for weather forecasting. By harnessing the latest advancements in GPUs, the integration between NWP and DNN enhances Atmo’s forecast horizon and precision of spatial and temporal resolutions.

Atmo focuses on empowering governments by providing weather forecasting that helps them plan well ahead of disasters, reducing losses and damages in the process.

We're introducing the first AI-based live global weather forecast.
Available to everyone at https://t.co/rVEsOxb6TK pic.twitter.com/jLHIb6aYPm

— Atmo (@atmo_ai) May 23, 2023

IBM- The Weather Company

IBM’s The Weather Company was named by ForecastWatch as the weather forecast provider with the highest likelihood of delivering the most accurate forecasts across all regions and time periods analysed. The model uses AI to combine information from almost 100 weather forecast models worldwide. The engine considers factors such as location, time, weather conditions and accuracy of recent forecasts for each model.

Jua.ai

Zurich-based company Jua, which launched an AI model for predicting weather, raised €2.5 million last year. Considered an advanced weather model, Jua uses deep neural network learning to deliver precise and accurate global weather forecasts.

In contrast to conventional weather models that combine regional models, the Jua model incorporates millions of data sources and provides forecasts at a high spatial resolution of 1 km². This helps the model achieve exceptional accuracy for predicting weather conditions up to 48 hours in advance, surpassing leading numerical models, which provide accurate results for only up to 12 hours.

Pangu Weather

Huawei recently unveiled its latest AI model, Pangu-Weather, which is said to revolutionise the prediction of weekly weather patterns on a global scale. The model utilises deep learning techniques along with 43 years of historical data. Its prediction speed is 10,000 times faster than traditional methods, which has the potential to reduce global weather prediction time to seconds.

NVIDIA Earth 2

NVIDIA Earth-2 is an accessible platform that accelerates climate and weather prediction through high-resolution, interactive simulations and visualisation.

The accelerated systems of Earth-2 can help climate scientists generate climate simulations at kilometre-scale resolution, perform extensive AI training and inference, and achieve real-time responsiveness with minimal delays.


Should I Use Generative AI for Hiring?

A hiring interview.
Image: ijeab/Adobe Stock

Generative artificial intelligence touches many aspects of hiring today, from writing job descriptions to filtering applicants. Some chatbots and keyword scanning tools, which have been part of the hiring process for years, can now add generative AI to their tool kits. Conversation is ongoing about government regulation of using AI when choosing who to employ; in particular, New York City, California and Illinois are proposing or initiating regulations about this topic.

Hiring managers and HR departments may need to consider how generative AI could impact bias and equality in their hiring processes and which product would be best to use. Whether you should use generative AI for hiring depends on a combination of factors.

Jump to:

  • Can I use generative AI for hiring?
  • How generative AI impacts bias and equality
  • What do recruiters and hiring managers think about AI for hiring?

Can I use generative AI for hiring?

It is possible to use generative AI in the hiring process, and many companies do. In a February 2023 survey from ResumeBuilder.com, 77% of the 1,000 surveyed companies said ChatGPT helps them write job descriptions, 66% use the AI chatbot to draft interview requisitions and 65% use it to respond to applicants.

Chad Sowash, former recruiter and co-host of the HR industry podcast Chad and Cheese, noted that “Companies are throwing job descriptions into ChatGPT and trying to get something that sounds more human. Which is funny! … They’re trying to use ChatGPT to soften it up.”

SEE: Another survey found many Americans do not want AI involved in hiring.

In addition, generative AI can quickly sort through text in order to help handle large volumes of resumes.

“It can help you summarize or analyze large quantities of text if people submit writing samples or some sort of large work product,” said Beth Noveck, director of the Burnes Center for Social Change at Northeastern University and the GovLab, in an interview with TechRepublic. “I think these [generative AI] can make it easier for employers to analyze large amounts of content.”

AI could also be used to flag when a candidate may not be right for the role they applied for but could fit in a different open position at the same company.

How generative AI impacts bias and equality

Hiring managers need to be aware that generative AI can introduce bias, and that the AI’s actions need to be auditable. A high-profile example of this was Amazon, which reduced the use of its AI hiring program in 2018 due to its bias against women. In 2022, Vox acquired documents alleging Amazon used a tool called Automated Applicant Evaluation to perform some recruiters’ tasks.

“Companies should look at quarterly cadences to be sure an Amazon situation doesn’t happen to them,” Sowash said.

Noveck anticipates AI could be trained to reduce bias, such as scanning for inappropriate communication, reporting harassment or removing subtle gender bias in job ads. Some services now offer generative AI training that could help equalize access to education, she said; one example is Khan Academy’s AI tool.

“I believe AI could be much, much less biased than what we’ve had as humans over hundreds of years just through ensuring that our vendors are above board, and that we’re doing the audits with a normal cadence,” Sowash said.

SEE: Skills-first hiring aims to make staffing decisions based on the talent someone actually possesses, not their job title. (TechRepublic)

What do recruiters and hiring managers think about AI for hiring?

The hiring managers we spoke to embraced the use of generative AI in resumes — as long as the information presented is accurate. Noveck cautioned that there are two main dangers when it comes to using AI for hiring: bias and a lack of insight into the decision-making process.

“I could use it to write code to help me screen resumes, [but] I want to be sure as with any tool that I really understand what it’s doing to get the output,” she said. “The danger with these tools is we don’t know how it makes its decision.”

She expects generative AI in hiring to become more accessible for job seekers and hiring managers.

“I think you’re going to see a lot of new products coming out … Even if it’s basically just a brand name and a wrapper stuck around what is essentially ChatGPT, we’re going to see people training specific models that are, for example, designed to help you with your interview process.”

Overall, Sowash said, “The thing that’s incredibly important for all the companies out there is to understand that just as all ATS are not created equal, all these [generative AI] products are not created equal.”

Sowash noted that parsing and contextualizing systems have been used in hiring for years. Textkernel, for example, offers chatbots and staffing automation that include modern generative AI but that also build on tech that has been used to scan resumes for decades.

Other hiring software companies, like Paradox and Talkpush, also offer conversational AI for staffing.

Sowash described a “black hole” into which resumes can fall when recruiters have too many applicants. He said that AI might be able to solve that problem.

“When you add gen AI into this and take away the ‘adminis-trivia’ [trivial administrative work] from a recruiter, you give time back to the recruiter, and they can provide white glove human interactions,” he said.

“I do believe this is a great opportunity for our [recruiting] industry to be more human,” he said.


Google Takes AI to Healthcare with Its ‘Med-PaLM’

When will the real use case of AI arrive? It’s a million-dollar question. People want to know why, if AI has attained so much intelligence, it is still far from offering any concrete solution in the field of medical science. While OpenAI is still focused on ChatGPT, Google is attempting a medical chatbot, something people actually want.

Google is currently conducting tests on an advanced artificial intelligence program that specializes in answering medical questions.

According to a recent report by The Wall Street Journal (WSJ), Google has been testing an AI tool called Med-PaLM 2, a medical chatbot that safely answers medical questions. It’s testing the product at renowned healthcare institutions like the Mayo Clinic research hospital.

Med-PaLM 2 is built on Google’s PaLM 2 language model and aligned to the medical domain so that it can answer medical questions more accurately and safely.

According to Google’s blog published in April, Med-PaLM 2 was the first LLM to perform at an “expert” test-taker level on the MedQA dataset of US Medical Licensing Examination (USMLE)-style questions, reaching 85%+ accuracy. It was also the first AI system to reach a passing score on the MedMCQA dataset, which comprises questions from the Indian AIIMS and NEET medical examinations, scoring 72.3%.

Is passing the Medical exam enough?

Passing an exam does not necessarily make you a good doctor. It is just a criterion by which society knows you have basic knowledge of your field and patients can trust you. In 2022, 91% of candidates cleared Step 1 of the USMLE, according to official data. Does that make all of them good doctors?

The expertise of doctors comes from real-world scenarios that vary from patient to patient. Patients differ in their unique characteristics, and prescribing medication cannot be generalized. Each patient’s body functions differently, and individual factors such as allergies must be taken into account. Google, aware of this complexity, acknowledges the importance of personalized medical care.

The research paper Towards Expert-Level Medical Question Answering with Large Language Models, published by Google and DeepMind, acknowledges the limitations of Med-PaLM 2. The paper says, “We note that our results cannot be considered generalizable to every medical question-answering setting and audience.”

Med-PaLM 2 is trained on multiple-choice and long-form medical question-answering datasets from MultiMedQA, excluding patients’ personal data in line with ethical norms.

Access to patients’ personal data could improve its effectiveness considerably, but patients are unlikely to be comfortable sharing such sensitive health information. Furthermore, Google executives confirmed that customers testing Med-PaLM 2 would retain control of their data in encrypted settings inaccessible to the tech company, and that the program would not ingest any of that data.

Should we ignore Healthcare LLMs?

Despite their limitations, the use cases of healthcare LLMs cannot be ignored. They simply need to be handled carefully, as healthcare is a matter of life and death.

It is no secret that the medical field advances every day, and it is tough for doctors to stay on top of it all.

Apart from Google, Microsoft is also making strides towards healthcare LLMs. The tech giant, which is also the biggest investor in OpenAI, teamed up in April with the health software company Epic to build tools that can automatically draft messages to patients using the algorithms behind ChatGPT.

“Medical knowledge doubles every 73 days,” said Junaid Bajwa, chief medical scientist at Microsoft, in an exclusive interaction with Analytics India Magazine, at Global AI Summit, Riyadh. He said it is truly about the richness of information – the data coming from publications on medical research, particularly related to ailments and treatments for various diseases and medical conditions across the globe. Looking at the publishing rate of research papers, he estimates that it has the potential to double every three days in a few years.

This is where medical large language models (LLMs) become valuable. Imagine a scenario where a doctor misses a critical approach during a medical emergency because they were unaware of an alternative treatment method. In such situations, LLMs can provide significant assistance, as they possess a wealth of textbook knowledge. By drawing on this extensive knowledge base, LLMs can offer doctors alternative approaches and ensure that vital information is readily available when needed most.

Healthcare LLMs can assist doctors in having informative discussions, answering complex medical questions, and finding important information in difficult medical texts. With the doctor’s expertise and the LLM’s input, the two can make a formidable team.

Google told employees in April that an AI model trusted as a medical assistant could “be of tremendous value in countries that have more limited access to doctors,” according to an internal email reviewed by The Wall Street Journal that quotes a researcher working on the project.

The post Google Takes AI Healthcare in Its ‘Med PaLM’ appeared first on Analytics India Magazine.

Big Techs Don’t Care About Lawsuits

The use of artists’ works to train generative AI models has drawn heavy criticism, with the companies behind them branded ‘encyclopedic thieves’. Since text-to-anything models spread like wildfire, several artists have sued the founding companies for theft and for using their creative property without consent. The latest addition to the list is American author and comedian Sarah Silverman.

The comedian, along with two other authors, has filed copyright infringement lawsuits against Meta Platforms and OpenAI for allegedly using their content without permission to train artificial intelligence language models.

Not just artists: a prominent law firm based in California has also filed a 157-page lawsuit against OpenAI for violating privacy laws by secretly scraping 300 billion words from the internet, tapping “books, articles, websites and posts — including personal information obtained without consent.”

Since the genesis of the flagship transformer-based product ChatGPT, all eyes have been on the data scraped from across the world wide web to train such models. Even before that, last year, Midjourney and other text-to-image models were being hit with similar lawsuits.

In response, Stability AI and Midjourney have asked a U.S. federal court to dismiss a group of artists’ class-action lawsuit against them, arguing that the AI-generated images were not similar to the artists’ work and that the lawsuit did not specify which exact works were misused.

Getting away with hefty fines

With the rising number of data theft accusations, it looks like Meta, OpenAI, and others don’t care.

As the heap of cases keeps piling up, the companies mostly get away with a monetary penalty, or use ‘Terms & Conditions’ as an excuse to dodge the bullet. When Apple incurred five consecutive fines from the Dutch competition regulator in 2022, it sparked a debate over whether financial penalties have any impact on big tech’s dominance of the digital economy.

Meta has been paying fines since it was Facebook. The Cambridge Analytica fiasco generated headlines when the Zuckerberg-run company got away with a $5 billion fine. However, the fine did not appease all the FTC commissioners. Two of the five called it insufficient and said it would do little to change the company’s behavior. Rather than accepting the settlement, commissioner Rebecca Kelly Slaughter believed the commission should have initiated litigation against Facebook and its CEO Mark Zuckerberg.

Commissioner Rohit Chopra dissented by stating “The settlement imposes no meaningful changes to the company’s structure or financial incentives, which led to these violations. Nor does it include any restrictions on the company’s mass surveillance or advertising tactics.”

Since 2015, tech companies including Google, Apple, Meta, and Amazon have collectively received penalties of over $30 billion. Fines are not just a ‘cost of doing business’ for tech giants, the president of the French Competition Authority, Isabelle de Silva, stated publicly. “Fines are an element of the identification of what is wrong in the conduct.”

Even investment analysts agree that stock markets view investigations into big tech as a contained risk, which likely results in fines rather than business model changes.

Better regulation is an emergency

While policymakers have been demanding change for nearly a decade, regulators have responded with drafts in progress, including India’s Digital Personal Data Protection Bill, which was recently approved. The European Union’s General Data Protection Regulation (GDPR), in force since 2018, already gives end consumers the right to control their data.

There is still no formula for punishing companies like OpenAI for their wrongdoings. With the rising number of lawsuits against Silicon Valley’s darling OpenAI and the rest, users can only hope that regulators figure out a way to make tech corporations behave ethically.

The post Big Techs Don’t Care About Lawsuits appeared first on Analytics India Magazine.

Generative AI could add up to $4.4 trillion annually to global economy


Artificial intelligence (AI) can play a crucial role in assisting leaders and their teams in making strategic, as well as immediate, data-driven decisions and taking effective action. Research on generative AI adoption in marketing reveals promising productivity gains, with marketers estimating that generative AI can save them the equivalent of over a month per year, making room for more meaningful work. Some estimates forecast that AI has the potential to automate 40% of the average workday.


The latest report from McKinsey on the economic potential impact of generative AI points to what may be the next productivity frontier. The report studied 16 business functions, examining 63 use cases in which the technology can address specific business challenges in ways that produce one or more measurable outcomes.

Here are some of the key forecasts on the impact of generative AI from the McKinsey report:

  • McKinsey's latest research estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases analyzed by McKinsey — by comparison, the United Kingdom's entire GDP in 2021 was $3.1 trillion. This would increase the impact of all artificial intelligence by 15 to 40%.
  • About 75% of the value that generative AI use cases could deliver falls across four areas: Customer operations, marketing and sales, software engineering, and R&D.
  • Generative AI will have a significant impact across all industry sectors. Banking, high tech, and life sciences are among the industries that could see the biggest impact as a percentage of their revenues from generative AI.
  • Generative AI has the potential to change the anatomy of work, augmenting the capabilities of individual workers by automating some of their individual activities. Current generative AI and other technologies have the potential to automate work activities that absorb 60-70% of employees' time today. The acceleration in the potential for technical automation is largely due to generative AI's increased ability to understand natural language, which is required for work activities that account for 25% of total work time.
  • The pace of workforce transformation is likely to accelerate, given increases in the potential for technical automation. Our updated adoption scenarios, including technology development, economic feasibility, and diffusion timelines, lead to estimates that half of today's work activities could be automated between 2030 and 2060, with a midpoint in 2045, or roughly a decade earlier than in our previous estimates.
  • Generative AI can substantially increase labor productivity across the economy, but that will require investments to support workers as they shift work activities or change jobs. Generative AI could enable labor productivity growth of 0.1 to 0.6% annually through 2040, depending on the rate of technology adoption and redeployment of worker time into other activities.
  • The era of generative AI is just beginning. Excitement over this technology is palpable, and early pilots are compelling. But a full realization of the technology's benefits will take time, and leaders in business and society still have considerable challenges to address. These include managing the risks inherent in generative AI, determining what new skills and capabilities the workforce will need, and rethinking core business processes such as retraining and developing new skills.
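
The report's 0.1 to 0.6% annual labor-productivity figure compounds over time. A minimal sketch, assuming an illustrative 2024-2040 horizon (the report itself gives only the annual range through 2040), shows the cumulative uplift each scenario implies:

```python
# Compound the report's estimated annual labor-productivity growth from
# generative AI (0.1% to 0.6% per year) over an assumed 2024-2040 horizon.
def cumulative_uplift(annual_rate: float, years: int) -> float:
    """Total productivity gain after compounding `annual_rate` for `years`."""
    return (1 + annual_rate) ** years - 1

years = 2040 - 2024  # 16 years, an assumed horizon for illustration
for rate in (0.001, 0.006):  # the low and high ends of the report's range
    print(f"{rate:.1%}/yr for {years} years -> "
          f"{cumulative_uplift(rate, years):.1%} total")
```

Even the low end compounds to roughly a 1.6% total gain over the period, while the high end approaches 10%, which is why the report frames adoption speed as the key variable.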

Value potential of generative AI by business function will vary. McKinsey's analysis of 16 business functions identified just four — customer operations, marketing and sales, software engineering, and research and development — that could account for approximately 75% of the total annual value from generative AI use cases.


One very important takeaway from the McKinsey report is this: "In addition to the potential value generative AI can deliver in specific use cases, the technology could drive value across an entire organization by revolutionizing internal knowledge management systems."

Using generative AI in just a few functions could drive most of the technology's impact across potential corporate use cases.

The impact of generative AI varies by business function, but the report cites compelling specific examples: generative AI could increase sales productivity by 3-5% of current global sales expenditures, and across the 63 use cases it has the potential to generate $2.6 trillion to $4.4 trillion in value across industries.

Generative AI productivity impact by business function

The McKinsey report concludes with forecasting the impact of generative AI on the future of work, noting that over the years, machines have given human workers various "superpowers".


For many industries, generative AI, trust, data security and improved digital experiences are key priorities to improve the overall customer experience. To learn more about the potential impact of generative AI on the economy, you can review the robust McKinsey report here.


These authors are suing OpenAI and Meta for copyright infringement now


Sarah Silverman speaks on May 05, 2022 in New York City.

Sarah Silverman has joined forces with fellow authors Richard Kadrey and Christopher Golden to sue Meta and OpenAI in dual claims of copyright infringement.

The suits are separate, one against each company, and the authors claim they never consented to their copyrighted books being used as training material for the large language models (LLMs) behind OpenAI's ChatGPT and Meta's LLaMA.


An LLM is a type of artificial intelligence algorithm trained on massive amounts of text from books and the internet to learn language patterns, grammar, and context, until it can generate human-like text and hold chat interactions with users.
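
The train-then-generate loop described above can be illustrated with a deliberately tiny sketch: a bigram model that counts which word follows which in a toy corpus, then generates text by repeatedly sampling a likely successor. Real LLMs use neural networks over vastly larger corpora, and the corpus and function names here are invented for illustration only.

```python
import random

# Toy "training" corpus; real LLMs ingest billions of words.
corpus = ("the model reads text . the model learns patterns . "
          "the model generates text .").split()

# "Training": count the observed successors of each word.
successors = {}
for current, nxt in zip(corpus, corpus[1:]):
    successors.setdefault(current, []).append(nxt)

def generate(start, length, seed=0):
    """Generate up to `length` words by repeatedly sampling a successor."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = successors.get(words[-1])
        if not options:  # no known successor: stop early
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 6))
```

The sampled output always begins "the model", because in this corpus "model" is the only word ever seen after "the"; the disputes in these lawsuits are precisely about what goes into that training corpus.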

According to the lawsuits, the models "remix the copyrighted works of thousands of book authors — and many others — without consent, compensation, or credit."

Copyright infringement has been one of the many concerns of AI skeptics since ChatGPT became widely available in November, triggering the generative AI boom and questions about how AI will affect the creativity and copyright process.


The lawsuits claim the LLMs were trained on illegally acquired materials, such as those found on "shadow library" websites. According to the OpenAI suit:

"The OpenAI Books2 dataset can be estimated to contain about 294,000 titles. The only 'internet-based books corpora' that have ever offered that much material are notorious 'shadow library' websites like Library Genesis (aka LibGen), Z-Library (aka B-ok), Sci-Hub, and Bibliotik. The books aggregated by these websites have also been available in bulk via torrent systems."

The Meta suit makes similar claims, and links to the sources from which the books' training data was gathered. It divides them in two: the first is Project Gutenberg, an online archive of books that are out of copyright; the second is the "Books3 section of ThePile," a dataset available on the popular AI project hosting site Hugging Face, which appears to represent all of Bibliotik, mentioned above.


The plaintiffs are represented by lawyers Joseph Saveri and Matthew Butterick, who also represent authors Mona Awad and Paul Tremblay in a lawsuit filed in June against OpenAI over copyright infringement.
