Texas Jumps the Gun on AI

A Look at TEA’s AI Grading System with Caleb Stevens and Dan Wilson


The recent implementation of artificial intelligence by the Texas Education Agency (TEA) to grade the State of Texas Assessments of Academic Readiness (STAAR) tests has sparked significant debate. For show #7 of the AI Think Tank podcast, I had the pleasure of discussing this topic with Caleb Stevens, an infrastructure engineer, founder of Personality Labs, and a fellow member of my MIT cohort.

Caleb also previously managed Enterprise Systems at National Math and Science, a nonprofit organization that specializes in helping teachers teach STEM education in high schools. We both share backgrounds in education, as well as a passion for helping others truly understand what they are learning.

Context and Background

Caleb explained that previously, the state hired around 6,000 human graders annually to handle the STAAR tests. This year, however, they have reduced that number to fewer than 2,000, relying heavily on AI. This drastic reduction has raised eyebrows and questions about the reliability and fairness of AI in such a critical role.


He also highlighted that the AI model used by TEA is said to be trained on just 3,000 human-graded responses. From the outset, this seemed woefully inadequate to both of us. We agreed that such a small dataset couldn’t possibly capture the diversity and complexity needed for accurate grading. That’s the size of a spreadsheet and not big data at all.

We also criticized TEA for not providing sufficient information about the AI’s metrics, training data, or the model used. Without transparency, it’s impossible to assess the AI’s effectiveness and fairness. Caleb noted that “they haven’t released any verifiable metrics, any kind of facts or information on the type of AI they’re using, the models or statistics that they claim to be training on. This lack of transparency is a major red flag.”

Concerns About Bias and Fairness

One of the significant issues Caleb raised was the potential for bias in AI models. Given the limited and potentially non-representative training data, the risk of bias is high. Without knowing the specifics of the AI’s training, it’s hard to gauge the extent of bias present in the system. Caleb mentioned that “we don’t know the training data that was used on this. We don’t know the base model that they’re using this data for, so we don’t know what built-in biases or other things that could be present with this model.”

We both agreed on the need for human oversight to mitigate these risks. Initially, AI should work alongside human graders to ensure accuracy and fairness. Caleb stated that “using artificial intelligence to assist humans in grading would have been a better approach.”

Implications for Teachers and Students

The shift to AI grading has resulted in significant job losses among human graders, which was a major point of concern for us. This reduction not only affects those graders economically but also removes experienced educators from the grading process.

Rebecca Bultsma, an AI Literacy & Ethics Advocate, joined our discussion and highlighted the importance of teachers in the grading process. She noted that grading helps teachers improve their teaching methods and understand student needs better. Rebecca emphasized that “one of the most valuable experiences teachers can have is going and doing marking of the standardized tests, because they’re exposed to a wide range of test answers and things that they might not be exposed to in their own school.”

Rebecca also stressed the need to involve parents in the discussion about AI in education. Parents need to understand how AI impacts their children’s education and have a say in how it is implemented. She said, “if the parents don’t understand how to support at home and the implications, they’re not going to jump on board, they’re going to fight against things like this.”

Historical Context and AI Capabilities

Caleb provided a historical overview of AI, tracing its roots back to the 1950s and explaining how public perception of AI has been shaped by popular culture, which often instills fear and misunderstanding. He remarked, “artificial intelligence isn’t necessarily the Terminator. It’s not out to find Sarah Connor or to take over the world. It’s really, right now, just an assistant to help humanity.”

Despite significant advancements, AI still has limitations, including issues like hallucinations and context loss. These problems are exacerbated when the AI is trained on small datasets. Caleb pointed out that “in short context windows, this level of AI performs fantastically. But 3,000 responses is about the size of one context window for models like GPT-4.”

Financial Implications and Transparency

The TEA’s decision to implement AI grading is partly driven by cost savings. They claim to save up to $20 million, but there is little information on how these savings will be used. Caleb highlighted, “they haven’t said what they’re going to be doing with the money they’re saving. Are they just going to save it? Put it in the bank? Are they going to reinvest this back into schools?”

We both questioned whether the cost savings justify the potential drawbacks, such as job losses and possible inaccuracies in grading. Caleb noted that “they saw that they could save dollar signs and that’s what they went for.”

The lack of transparency about the AI’s operation, data practices, and decision-making processes is a major concern. Without clear information, it is hard to trust the system. Caleb emphasized, “if they would have released model information or something like that, more data and statistics about what they were using, how they were using it, I think would have alleviated a lot of these concerns.”

Pilot Programs and Gradual Implementation

We discussed that a more prudent approach would have been to start with pilot programs in select school districts. This would allow for thorough testing and gradual implementation based on verified results. Caleb agreed, noting that “the way they should have done it was probably run them side by side for the year, right? Aren’t they really doing that?”

By gradually increasing the proportion of AI-graded tests, the TEA could build confidence in the system while continuously refining the model based on real-world performance. Caleb added, “I think I would probably prefer to go the other way or to run two systems side by side for a year.”

Addressing Privacy and Ethical Concerns

Fellow cohort member and Cybersecurity Specialist Yemi Adeniran raised concerns about the privacy of student data and how it is handled and protected. Ensuring the security and confidentiality of student information is crucial. Yemi asked, “what about the privacy concerns of student data and the handling of protection of students’ data in that regard?”

Caleb acknowledged these concerns and emphasized the need for robust data protection practices, although details on TEA’s current practices are lacking. He mentioned, “we would just have to make positive assumptions here and say that there are laws in the U.S. FERPA is one of them. That’s the Family Educational Rights and Privacy Act.”

Mo Morales, a parent and UX/AI Developer, questioned whether students’ creative output is adequately protected, given that their work is being used to train AI models. He asked, “are these students, don’t they enjoy the protections of their own personal, private, creative output like anybody else?”


Comparing AI and Human Graders

Patrick Stingely, a Data Scientist and Security Specialist, argued that AI might actually reduce bias compared to human graders, who can be influenced by personal biases and subjective judgments. He mentioned, “the computer actually has a better chance of providing unbiased results. One child’s content is going to be the same as another.”

Caleb countered that AI might not understand regional slang or cultural references, leading to unfair grading. He pointed out, “it could show bias because it just doesn’t understand the culture from which the writing is coming from.”

Both sides agreed that a balanced approach, where AI assists human graders rather than replacing them entirely, would be more effective. Caleb stated, “AI can and will assist us in that area. I believe that that probably would have been the better approach to begin with.”

Larry Sawyer, a Principal User Experience Designer with AI specialties, made some solid points as well. “Something that Yemi mentioned made me think of this. I mean, it’s been a long time since I was in school, but especially with, like, math and science problems, you have to show your work. Like, you don’t get credit if you can’t show how you got to that solution, right?”

“Why is an educational institution not showing their work? Isn’t that part of their mandate for students?”

Larry followed that with more wisdom, stating, “They’ve gone through and graded this stuff. I can see if this was like a private company where they would want to keep things secure and private, you know, from an intellectual property standpoint and other factors. But this is a public school system. Why the secrecy? Why isn’t there more transparency?”

For his final point and something we all agreed on, Larry added, “They have data. Like when some of the biggest challenges that I’ve seen with AI is finding a complete data set for you to be able to utilize and do your testing and your evaluation. Now, I would imagine they have years’ worth of tests and years’ worth of, you know, grades and things that they could go through and correspond, you know, grade those things at 75-25 and compare it to the grades from the previous years. That’s how you do your testing. You don’t do testing on live data in a live environment. I mean, I don’t understand why they would do that.”

Parental Concerns and Choice

Rebecca emphasized the importance of involving parents in discussions about AI in education. Parents need to understand the implications and have a voice in decisions that affect their children. She said, “if the parents don’t understand how to support at home and the implications, they’re not going to jump on board, they’re going to fight against things like this.”

Parents should have the option to opt out of AI grading if they are uncomfortable with it. TEA’s current policy requires parents to pay $50 for human grading, which Caleb criticized as unfair. He mentioned, “if you want the luxury of a human grading your paper, you’re going to have to pay $50 a child to do it.”

Learning from Global Perspectives

Rebecca shared insights from Canada, where the approach to standardized testing and funding differs significantly from Texas. Underperforming schools in Canada receive additional support rather than being penalized. She noted, “they’re not punished for underperforming, there is standardized testing. But I don’t think it at all impacts how much money schools get.”

Involving teachers in the grading process is seen as valuable professional development. Teachers benefit from exposure to a wide range of student responses. Rebecca emphasized, “one of the most valuable experiences teachers can have is going and doing marking of the standardized tests, because they’re exposed to a wide range of test answers.”

Conclusion and Recommendations

Several key concerns with TEA’s AI grading implementation stood out, including inadequate training data, lack of transparency, potential bias, job losses, and the impact on education quality. I would love to see working guidelines established for accomplishing safe and accurate testing going forward.

To address these concerns, a more refined and robust approach is recommended:

  1. Expand Training Dataset: Use a minimum of 100,000 human-graded tests to train the AI, ensuring diverse and comprehensive data.
  2. Diverse Sampling: Include data from various demographics and regions to minimize bias.
  3. Human-AI Collaboration: Initially use AI alongside human graders with continuous feedback loops.
  4. Robust Testing and Validation: Employ cross-validation and regular bias checks (a minimal sketch of this validation loop follows the list).
  5. Transparency and Peer Review: Make data and methodologies publicly available for review and engage independent third parties for audits.
  6. Incremental Rollout: Start with pilot programs and gradually increase AI grading.
  7. Regular Updates and Retraining: Continuously update the model with new data and retrain to adapt to changes.
  8. Involve Parents and Teachers: Ensure parents have a voice and provide options to opt-out of AI grading if desired.
  9. Clear Communication: Be transparent about the use of savings and how they will benefit the educational system.
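
To make items 2 and 4 concrete, here is a minimal sketch, in Python, of the offline validation loop Larry described: hold out a slice of historical human-graded responses, score them with the candidate model, and measure agreement overall and per demographic group. Everything here is hypothetical; the file, the column names, and the ai_grade() placeholder stand in for data and a model that TEA has not disclosed.

import pandas as pd
from sklearn.metrics import cohen_kappa_score
from sklearn.model_selection import train_test_split

def ai_grade(text: str) -> int:
    # Placeholder for the undisclosed grading model; returns a 0-4 score.
    # This crude length heuristic exists only to make the sketch runnable.
    return min(4, len(text.split()) // 50)

# Hypothetical archive of past responses with human-assigned scores;
# assumed columns: response_text, human_score, region
archive = pd.read_csv("historical_graded_responses.csv")

# Larry's 75/25 idea: evaluate only on responses the model never trained on
train, test = train_test_split(archive, test_size=0.25, random_state=42)

ai_scores = [ai_grade(t) for t in test["response_text"]]

# Quadratic weighted kappa is a standard agreement metric for ordinal scores
print(cohen_kappa_score(test["human_score"], ai_scores, weights="quadratic"))

# Bias check: agreement should hold up within every demographic slice
for region, group in test.groupby("region"):
    preds = [ai_grade(t) for t in group["response_text"]]
    print(region, cohen_kappa_score(group["human_score"], preds, weights="quadratic"))

If agreement drops for any slice, that is precisely the kind of bias Caleb warned about, caught before a single live test is graded.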

Our discussion underscored the importance of addressing the heartfelt concerns of all stakeholders, including teachers, parents, and students. It is crucial to ensure that the implementation of AI in education enhances the learning experience without compromising fairness, transparency, or quality.

By taking a cautious, well-structured approach, the TEA can build a reliable and effective AI grading system that benefits everyone while maintaining trust and accountability. We hope that with this information, you have more insight and make the right decisions should this impact you, your family, and your community.

Join us as we continue to explore the cutting-edge of AI and data science with leading experts in the field.

Subscribe to the AI Think Tank Podcast on YouTube.
Would you like to join the show as a live attendee and interact with guests? Contact Us

Why India Needs More AI4Bharats


When a country wants to build AI, one of the most important ingredients is a robust industry-academia partnership. The US and European countries have already recognised this, and China has been capitalising on it as well. But when it comes to India, there is a visible lack of such initiatives.

One prominent initiative taking wing in India is AI4Bharat, which started as a collaboration between IIT Madras and Nandan Nilekani’s EkStep Foundation. It is sponsored by Bhashini, Microsoft, Google, and NVIDIA, and its contribution to the Indic open-source AI community has been tremendous. But the problem is that it is the only prominent one in the country so far.

In a podcast with AIM, Adarsh Shirawalmath, the founder of Tensoic, and the creator of Kannada Llama, said that he is very inspired by the work that AI4Bharat is doing in the country. “We need more AI4Bharats in the country,” he said.

The Indian academia narrative

One of the biggest challenges in building AI models in India, apart from compute, is the rate of adoption within the industry. “We need to do what China is doing,” said Shirawalmath, explaining that while China’s government funds such projects, India can leverage industry partnerships to compete in the race.

BharatGPT is another similar initiative, started by IIT Bombay and now being built in partnership with the government’s Department of Science and Technology. Its goal is likewise to build AI for Indic languages by taking the open-source route.

“Not sure what is happening right now [with BharatGPT], but it would definitely drive India’s AI forward,” said Adithya S Kolavi, the creator of Indic LLM Leaderboard from CognitiveLab.

Kolavi pointed out that one of the biggest reasons behind his love for AI4Bharat is that the initiative is bullish on open source. For example, Llama 3’s tiktoken tokeniser is not compatible with Indic languages. To make up for this, AI4Bharat’s IndicTrans2 tokeniser was released as open source, which was very helpful for the community.

Mufeed VH, the creator of Devika, also told AIM that he has tested AI4Bharat’s IndicLLMSuite dataset, which is rich enough for building AI models. The dataset contains around 251 billion tokens across 22 languages, mostly converted from audio or translated into Indic languages from Wikipedia.

“More data wouldn’t hurt,” Mufeed said. He recently got into Y Combinator for his startup Stition.ai, which is building security focused solutions for AI.

The problem Mufeed points out is that even though IITs and NITs are doing the research, they do not get the support they need from the industry. “China is racing against the US. I think the Indian government should do the same,” he added.

Stuck at Research

To be fair, research focusing on LLMs, voice models, and AI applications in several fields does come out of Indian universities, but most of it gets stuck at the research phase. This is slowly changing, however, with several researchers submitting their papers to ICML and NeurIPS.

“None of the research from the universities actually comes out. They just do research in the field like a final year project, and it dies there,” said Mufeed, arguing that researchers should step out of the universities and turn their creations into products, or perhaps build a research lab such as AI4Bharat.

Kolavi pointed out that there are also not enough grants in India, from universities or companies, for research to flourish. “You have the VC kind of things, but grants are essential to push research forward. I have not seen that concept flourish in India,” added Kolavi.

Ivy League universities have the infrastructure to facilitate research; Indian universities do not. In a recent post, a student pointed out that one Indian university has merely six GPUs for AI research.

Similar thoughts were shared by Professor V Ramgopal Rao from BITS Pilani on how industry involvement is key, whether to make our students industry-ready or to reap the benefits of research happening in the country. Industry involvement is needed to convert research into innovation.

Meanwhile, Google and OpenAI have been taking active interest in working on Indic languages. Google spoke at length about its Navrasa model, which is built in partnership with Telugu LLM Labs in India. OpenAI at its Spring Update released GPT-4o, which also includes a huge corpus of Indic language data.

Apart from gathering massive funds, which is definitely a priority, bridging the gap between research and industry will require companies within the country to adopt Indian AI models. We need more initiatives out of our universities, collaborating with industry, to drive India’s AI momentum forward.


3 Courses You Should Consider If You Want to Become a Data Analyst


Day to day, I receive messages from people all over the world regarding how to become a data analyst. Some come from a background where their current career involves data, whilst others are completely new to data analysis in general.

Starting a new career can be difficult, especially when you feel like you do not have the right skill set to be successful. That feeling is imposter syndrome; it is normal, and we have all gone through it at least once in our lives.

That said, when you decide to change careers, you want to be sure you know what you’re doing and that you have the right skills to keep growing and improving. This article will help you choose the right course, without the hassle of searching around yourself.

DataCamp’s Data Analyst in Power BI

Link: DataCamp’s Data Analyst Course in Power BI

Power BI is one of the most popular tools out there, and data analysts love it. If you are looking for a career where you can combine being a data analyst with being a business analyst, this course from DataCamp may be the one for you.

Without any prior experience required, you will learn how to import, clean, manipulate, and visualize data in Power BI—all critical skills for any aspiring data professional. You will also go through hands-on exercises that will take you through the data analysis process as well as the best practices with Power BI.

If you want to get certified, you will also receive a 50% discount code for the Microsoft PL-300 certification after completing the track to help you take your data analyst career to the next level.

Meta Data Analyst Certification

Link: Meta Data Analyst Professional Certificate.

This course is aimed at beginners who are looking to enter the tech industry from a data analyst approach. You can take this course in your own time and at your own pace. The whole course will take you 5 months to complete if you commit 10 hours a week, but if you can commit more you can get it done faster!

The certification is made up of 5 courses:

  • Introduction to Data Analytics
  • Data Analysis with Spreadsheets and SQL
  • Python Data Analytics
  • Statistics for Marketing
  • Introduction to Data Management

In these 5 courses, you will learn how to collect, clean, sort, evaluate, and visualise data; apply the OSEMN framework used to guide the data analysis process; use statistical analysis, such as hypothesis testing and regression analysis, to make data-driven decisions; and understand the foundational principles of effective data management within an organisation.

Google's Data Analytics Certification

Link: Google's Data Analytics Professional Certification

A very popular course, Google's Data Analytics Professional Certification helps you understand the practices and processes used by associate data analysts. The course is aimed at beginners, and you are advised to commit 10 hours a week to complete it in 6 months.

You will learn key analytical skills, such as data cleaning, analysis, and visualisation, as well as tools such as spreadsheets, SQL, R programming, and Tableau. Using these skills, you will learn how to visualise and present data findings in dashboards, presentations, and commonly used visualisation platforms.

Many have taken this course with no prior experience and are now working as data analysts.

Wrapping it Up

It can be a hassle to find the right course, especially when you are new to the sector. The team at KDnuggets wants everybody to grow and learn.

We hope articles like these are helping you start the next chapter in your life with guidance and ease!

Nisha Arya is a data scientist, freelance technical writer, and an editor and community manager for KDnuggets. She is particularly interested in providing data science career advice or tutorials and theory-based knowledge around data science. Nisha covers a wide range of topics and wishes to explore the different ways artificial intelligence can benefit the longevity of human life. A keen learner, Nisha seeks to broaden her tech knowledge and writing skills, while helping guide others.


AI to Help You Understand This Indian Classical Dance


Kathakali, a traditional dance-drama art form from Kerala, can now be understood by all, including foreign audiences. The form draws its subject matter from the Ramayana, the Mahabharata, and stories from Shaiva literature, and AI can now help interpret the performers’ stylised gestures and facial expressions.

A recent research paper by the Indian Institute of Information Technology in Kottayam, in collaboration with the Kerala Kalamandalam, Thrissur, aims to develop an AI-enabled tool capable of providing semantic interpretations of the art form.

The study hopes to leverage advancements in AI, particularly in computer vision and natural language processing, to preserve and interpret cultural heritage, specifically focusing on the Indian classical art form Kathakali.

The goal is to develop a system that can automatically recognise and interpret the gestures, or mudras, used in Kathakali performances. By doing so, the study aims to address the challenges of digitising and understanding this intricate art form, which involves complex hand gestures and facial expressions.

By building a database of vector representations of mudras and comparing them to input data, the system can identify and interpret the gestures being performed. Importantly, the study aims to achieve this with minimal training data, making it adaptable to other dance forms or related areas like sign language recognition.
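
As a rough illustration of that retrieval idea (a sketch only, not the paper’s actual pipeline), a gesture’s embedding can be matched against a small database of stored mudra vectors by cosine similarity. The mudra names below are real Kathakali hand gestures, but the three-dimensional vectors are invented for the example; a real system would use high-dimensional embeddings from a vision model.

import numpy as np

# Hypothetical database: mudra name -> embedding vector (invented values)
mudra_db = {
    "pataka": np.array([0.9, 0.1, 0.0]),
    "mudrakhya": np.array([0.2, 0.8, 0.1]),
    "kataka": np.array([0.1, 0.2, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def interpret(gesture_vec: np.ndarray) -> str:
    # Return the stored mudra most similar to the observed gesture
    return max(mudra_db, key=lambda name: cosine(mudra_db[name], gesture_vec))

print(interpret(np.array([0.85, 0.2, 0.05])))  # -> pataka

Because recognition is a lookup against stored exemplars rather than a classifier trained per class, adding a new mudra (or a sign-language gesture) only requires inserting one more vector, which is how such a system can get by with minimal training data.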

Overall, the study seeks to contribute to the preservation and understanding of cultural heritage by applying AI techniques to digitally analyse and interpret traditional art forms.

AI and dance are not a new duo. In 2019, renowned choreographer Wayne McGregor unveiled his newest creation, “Living Archive: An AI Performance Experiment,” in a world premiere. McGregor collaborated with Google Arts and Culture Lab to develop an innovative AI choreographic tool, drawing from 25 years of his video archive.


Essential Python Libraries for Data Manipulation


As a data professional, it’s essential to understand how to process your data. In the modern era, it means using programming language to quickly manipulate our data set to achieve our expected results.

Python is the most popular programming language data professionals use, and many libraries are helpful for data manipulation. From a simple vector to parallelization, each use case has a library that could help.

So, what are these Python libraries that are essential for Data Manipulation? Let’s get into it.

1. NumPy

The first library we would discuss is NumPy. NumPy is an open-source library for scientific computing activity. It was developed in 2005 and has been used in many data science cases.

NumPy is a popular library, providing many valuable features for scientific computing, such as array objects, vector operations, and mathematical functions. Many data science use cases also rely on complex table and matrix calculations, and NumPy simplifies the calculation process.

Let’s try NumPy with Python. Many data science platforms, such as Anaconda, have NumPy installed by default, but you can always install it via pip.

pip install numpy

After the installation, we will create a simple array and perform array operations.

import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
c = a + b
print(c)

Output: [5 7 9]

We can also perform basic statistics calculations with NumPy.

data = np.array([1, 2, 3, 4, 5, 6, 7])
mean = np.mean(data)
median = np.median(data)
std_dev = np.std(data)

print(f"The data mean:{mean}, median:{median} and standard deviation: {std_dev}")

Output: The data mean:4.0, median:4.0 and standard deviation: 2.0

It’s also possible to perform linear algebra operations such as matrix calculation.

x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])
dot_product = np.dot(x, y)

print(dot_product)

Output:

[[19 22]
[43 50]]

There is so much you can do with NumPy. From handling data to complex calculations, it’s no wonder many libraries use NumPy as their base.

2. Pandas

Pandas is the most popular Python data manipulation library for data professionals. I am sure many data science classes use Pandas as the basis for subsequent lessons.

Pandas is famous because it has an intuitive yet versatile API, so many data manipulation problems can easily be solved with it. Pandas allows the user to perform data operations and analyze data from various input formats such as CSV, Excel, SQL databases, or JSON.

Pandas is built on top of NumPy, so NumPy object properties still apply to any Pandas object.

Let’s try the library out. Like NumPy, it’s usually available by default if you are using a data science platform such as Anaconda. However, you can follow the Pandas installation guide if you are unsure.

You can initiate a dataset from NumPy objects and get a table-like DataFrame object, then show the top five rows of data, with the following code.

import numpy as np
import pandas as pd

np.random.seed(0)
months = pd.date_range(start='2023-01-01', periods=12, freq='M')
sales = np.random.randint(10000, 50000, size=12)
transactions = np.random.randint(50, 200, size=12)

data = {
    'Month': months,
    'Sales': sales,
    'Transactions': transactions
}
df = pd.DataFrame(data)
df.head()

(Output: the first five rows of the monthly sales DataFrame)

Then you can try several data manipulation activities, such as data selection.

df[df['Transactions'] < 100]

It’s also possible to do data calculations.

total_sales = df['Sales'].sum()
average_transactions = df['Transactions'].mean()

Performing data cleaning with Pandas is also easy.

df = df.dropna()
df = df.fillna(df.mean())

There is so much you can do with Pandas for data manipulation. Check out Bala Priya’s article on using Pandas for data manipulation to learn more.

3. Polars

Polars is a relatively new Python data manipulation library designed for the swift analysis of large datasets. Polars boasts 30x performance gains compared to Pandas in several benchmark tests.

Polars is built on top of Apache Arrow, so it manages memory efficiently for large datasets and allows for parallel processing. It also optimizes data manipulation performance using lazy execution, which delays computation until it’s necessary.

For the Polars installation, you can use the following code.

pip install polars 

Like Pandas, you can initiate the Polars DataFrame with the following code.

import numpy as np
import polars as pl

np.random.seed(0)
employee_ids = np.arange(1, 101)
ages = np.random.randint(20, 60, size=100)
salaries = np.random.randint(30000, 100000, size=100)

df = pl.DataFrame({
    'EmployeeID': employee_ids,
    'Age': ages,
    'Salary': salaries
})

df.head()

(Output: the first five rows of the employee DataFrame)

However, there are differences in how we use Polars to manipulate data. For example, here is how we select data with Polars.

df.filter(pl.col('Age') > 40)

The API is considerably more complex than Pandas’, but it’s helpful if you require fast execution on large datasets. On the other hand, you would not get much benefit if the data size is small.
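
The lazy execution mentioned earlier is worth seeing in code. The sketch below reuses the df built above and assumes a recent Polars release; nothing is computed until .collect() is called, which lets Polars optimise the whole query plan in one pass.

result = (
    df.lazy()                           # switch to a lazy query plan
      .filter(pl.col('Age') > 40)       # nothing is executed yet
      .select(pl.col('Salary').mean())  # still just extending the plan
      .collect()                        # optimise and run everything at once
)
print(result)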

To know the details, you can refer to Josep Ferrer's article on how Polars compares to Pandas.

4. Vaex

Vaex is similar to Polars in that it was developed specifically for manipulating large datasets. However, the two differ in how they process data: Vaex utilizes memory-mapping techniques, while Polars focuses on a multi-threaded approach.

Vaex is best suited for datasets far bigger than those Polars is intended for. While Polars also targets large-dataset processing, it works best on datasets that still fit into memory, whereas Vaex shines on datasets that exceed it.

For the Vaex installation, it’s better to refer to the documentation, as an incorrect install could break your environment.
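
Once installed, the workflow looks broadly like the sketch below. The file name is hypothetical, and it assumes the data has already been converted to an HDF5 or Arrow file that Vaex can memory-map, so opening even a file far larger than RAM is near-instant.

import vaex

# Memory-map the file instead of loading it into RAM (hypothetical path)
df = vaex.open('employees.hdf5')

# Filters are lazy expressions; no copy of the underlying data is made
older = df[df.Age > 40]

# Aggregations stream over the mapped data in chunks
print(older['Salary'].mean())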

5. CuPy

CuPy is an open-source library that enables GPU-accelerated computing in Python. It was designed as a drop-in replacement for NumPy and SciPy when you need to run calculations on NVIDIA CUDA or AMD ROCm platforms.

This makes CuPy great for applications that require intense numerical computation and GPU acceleration. CuPy can utilize the parallel architecture of GPUs and is beneficial for large-scale computations.

To install CuPy, refer to its GitHub repository, as the many available versions might or might not suit the platform you use. For example, below is the install for the CUDA platform.

pip install cupy-cuda11x

The APIs are similar to NumPy, so you can use CuPy instantly if you are already familiar with NumPy. For example, the code example for CuPy calculation is below.

import cupy as cp

x = cp.arange(10)
y = cp.array([2] * 10)

z = x * y

print(cp.asnumpy(z))

CuPy rounds out this list of essential Python libraries, and it’s invaluable if you continuously work with large-scale computational data.

Conclusion

All the Python libraries we have explored are essential in certain use cases. NumPy and Pandas might be the basics, but libraries like Polars, Vaex, and CuPy would be beneficial in specific environments.

If you have any other library you deem essential, please share them in the comments!

Cornellius Yudha Wijaya is a data science assistant manager and data writer. While working full-time at Allianz Indonesia, he loves to share Python and data tips via social media and writing media. Cornellius writes on a variety of AI and machine learning topics.


6 Indian Generative AI Platforms for Recruiters and HR Professionals


In an ever-evolving landscape of work, 2024 emerges as a year of hope with transformative potential. Against this backdrop, India stands tall: the latest ManpowerGroup Employment Outlook Survey suggests that 36% of companies are gearing up for recruitment drives between April and June.

Fueling this momentum are technological advancements, as various platforms turn to AI models to streamline the hiring process. The results?

Enter generative AI platforms where innovation meets necessity. The platforms present recruiters and HR professionals with an array of tools to navigate this season of talent acquisition.

We’re well aware that every business is unique, with its distinct recruitment processes. Thus, here are the latest solutions tailored to make the journey of 2024 recruitment seamless for all.

Machine Hack for Enterprises

Machine Hack Gen AI revolutionises recruitment practices through its AI-driven solutions, optimising every stage of the candidate evaluation process. By leveraging advanced algorithms, recruiters can accurately measure candidates’ skills for in-demand roles.

The platform provides tailored evaluations equipped with essential features such as build tools and interactive previews, ensuring comprehensive assessments aligned with industry requirements.

Moreover, Machine Hack Gen AI prioritises candidate success by creating an environment that mirrors real-world coding scenarios. With intuitive interfaces and essential tools like built-in terminals and native whiteboards, candidates can confidently showcase their abilities. This approach not only enhances candidate experience but also enables recruiters to gauge candidates’ adaptability and proficiency in practical settings.

Furthermore, the platform promotes transparency and integrity through AI-enhanced structured interviews, effectively combating fraudulent practices. By incorporating advanced algorithms, recruiters can assess candidates objectively, free from biases.

Oracle Recruiting

In 2018, Oracle Recruiting was launched as a part of the Oracle Cloud HCM suite. Now, it has integrated AI-driven features, allowing recruitment teams to attract top talent and swiftly fill job openings.

These features seamlessly integrate into existing HR workflows with the assistance of Oracle Cloud Infrastructure’s (OCI) generative AI services.

Deepa Param Singhal – vice president, Cloud Applications, Oracle India shares how AI is transforming the #HR function and bringing a paradigm shift in #employeeexperience, #productivity and #talent supply chain. #CloudWorld https://t.co/BzYWDLZbxH pic.twitter.com/INQkK9yoHb

— Oracle India (@Oracle_India) April 3, 2024

A standout success story is Kotak Mahindra Bank, a user of Oracle’s Fusion HCM. By harnessing OCI alongside Fusion Cloud ERP and Fusion Cloud HCM, the bank achieved a 51% reduction in manual tasks, significantly enhancing overall productivity.

Utilising Oracle Taleo Cloud, the bank efficiently manages its extensive workforce of over 30,000 employees, facilitating a smooth transition to a largely paperless HR system. Furthermore, the annual performance review process has seamlessly transitioned to a digital format.

As part of its cloud-driven evolution, the bank is also mandating the migration of all User Acceptance Testing instances to the public cloud, showcasing its dedication to modern and streamlined practices.

Zoho Recruit

The first version of Zoho Recruit was launched in 2009. Now, the latest version offers AI-based recruitment software for smart hiring. Basically, it assists in identifying top talent and conducting behavioural assessments of candidates. By utilising AI-powered tools, organisations can improve hire quality, reduce bias, and optimise time management for hiring teams.

Features like candidate matching, automated job description generation, and sourcing boosters enable quicker, more accurate candidate selection. Zoho also offers a resume parser mapping feature powered by AI and proprietary algorithms to help you standardise and align the parsed resume structure to the form supported by your organisation.

Furthermore, to enhance candidate communication and experience, Zia, Zoho Recruit’s chatbot, acts as your personal AI assistant, working in sync with Zoho Recruit’s Career Site to keep candidates updated on job availability and help them track their job applications.

HirePro

Based in Bengaluru, HirePro, founded in 2004, specialises in recruitment automation, offering enterprise-grade software to streamline hiring processes. Leveraging AI, they provide solutions for attracting, screening, assessing, interviewing, and onboarding talent.

They also specialise in addressing fraud in remote recruitment. Using Amazon Web Services (AWS), HirePro’s AI algorithms detect fraudulent activities during various stages of recruitment, ensuring secure processes. Employers have reportedly experienced shorter hiring cycles, reduced costs, and an improved quality of hires, enhancing brand equity through an AI-driven recruitment approach.

GetWork

GetWork.ai is a game-changer for recruiters, particularly when they have to parse through thousands of applications that come in after posting a job on the platform. To streamline the recruiter’s workload, they’ve developed a feature that showcases a candidate’s percentile match between their resume and the job description.

Once the score determined by the company is met, an AI voice bot swiftly conducts interviews at a staggering rate of 1000 calls per minute. Then, a recruiter can hire potential candidates in just one day.

Since 2019, GetWork has placed over 30,000 freshers from Tier 2 and 3 colleges with a network of 8,000 employers, including industry giants like Byju’s and Reliance.

PeopleStrong

PeopleStrong, a human capital management SaaS platform, recently unveiled a generative AI integration across its entire employee lifecycle.

It has begun to harness AI to simplify tasks throughout the employee’s journey, including generating OKRs, coaching new joiners with nudges, creating comprehensive reports, and offering recommendations drawn from existing enterprise-wide data.

In 2017, PeopleStrong also pioneered India’s first AI-powered chatbot, Jinie, followed by the introduction of the AI-driven Skill Cloud in 2022.


Indian IT is Training a GenAI Workforce to Eventually Replace Them With AI


Indian IT giants, including Infosys, TCS, and Wipro, have collectively trained over 825,000 employees in generative AI. However, an X user has revealed that the depth and quality of these training programs may be questionable, as a friend working at one of the largest IT companies in India completed a GenAI course in just an hour by clicking the “next” button hundreds of times.

The post received widespread agreement from people across India’s tech ecosystem. While a few said that these are exactly the people who will be replaced by AI, others suggested employing OpenAI’s new GPT-4o model for such mundane tasks.

Impressive Numbers, But Questionable Depth

Wipro’s new chief, Srinivas Pallia, stated that the company’s 225,000 employees have been trained in AI 101, a basic level of AI. The company is considering advanced courses for individuals based on the types of projects and proofs of concept (PoCs) they are involved in.

Infosys and TCS have also announced plans to train 100,000 and 150,000 employees, respectively, focusing on theoretical and practical aspects of GenAI through partnerships with industry leaders.

While these numbers are impressive, questions arise about the depth of skills, specialisation options, and real-world application integration. The recent revelation about the superficial nature of some training programs raises concerns about the actual readiness of the workforce for GenAI projects.

Underutilisation of Trained Workforce

Although Indian IT companies employ a substantial workforce compared to other major tech giants, there are concerns about the underutilisation of their highly-trained employees. This limitation may hinder the potential for innovation and the effective application of GenAI in real-world scenarios.

The recent revelation about the superficial nature of some training programs further underscores the need for a more comprehensive and rigorous approach to GenAI education within the Indian IT industry. Companies must ensure that their employees are not only trained in the basics but also equipped with the necessary skills and knowledge to effectively implement GenAI solutions in practice.

As the demand for GenAI expertise continues to grow, Indian IT companies must prioritise the development of a truly skilled and ready workforce to remain competitive in the global market. This requires a commitment to in-depth training, practical application, and continuous learning to keep pace with the rapidly evolving field of generative AI.

The post Indian IT is Training a GenAI Workforce to Eventually Replace Them With AI appeared first on AIM.


Meet AWS’ New CEO, Matt Garman, Filling Adam Selipsky’s Shoes

Adam Selipsky’s announcement this past week that he is stepping down as CEO of AWS, effective June, didn’t come as a surprise to many: the possibility that his second stint at the cloud giant would be a short-term commitment had already been discussed with former AWS CEO Andy Jassy.

However, the timing of the announcement may have caught some off guard, especially considering AWS’ recent strong financial performance and ambitious expansion plans.

While Selipsky’s leadership was commended for steering AWS towards long-term customer-centric strategies, the company has faced some recent challenges. These include layoffs, criticism for being slow to roll out generative AI services, and difficulty fending off competitors like Microsoft Azure, which is slowly eating into its cloud market share, even as Google Cloud remains consistent.

Matt Garman Steps Up

The one stepping into the vastly important role – Matt Garman – isn’t new to AWS. The 48-year-old has been with the company for over 18 years, in different roles, joining as an intern in 2005. That was when Jassy hired Selipsky from a software company to oversee marketing, sales, and support for the new cloud computing initiative conceptualised a couple of years before that.

Garman has been positioned to lead AWS for years, given his long tenure and reputation as a protégé and part of Jassy’s inner circle.

Jassy brought Garman into his inner circle early on in the latter’s career and moved him through different areas of the company to broaden his experience.

In his early years, Garman held leadership roles within Amazon Elastic Compute Cloud (EC2), a key pillar of AWS’s $90 billion revenue business. He helped launch Amazon Elastic Block Store (EBS), a storage service.

Garman quickly developed a reputation for a strong work ethic—infamously staying awake for two days, running point on the effort to bring services back online during a major outage related to EBS in 2011.

Many employees saw Garman’s recent appointment as the head of AWS’s extensive sales and marketing team as a clear indication that the company was preparing him for the CEO role. They believed this position was just as much about marketing and sales as it was about technology.

Immediate Threats

Garman will have to use his vast experience and role in establishing the biggest cloud provider, his understanding of AWS’ data centres and cloud products, and his recent sales experience to fend off competition from Microsoft and Google.

While AWS’ first-quarter results showed an increase of 17% in revenue to $25 billion, Microsoft reported 31% revenue growth in its Azure and other cloud services.

This was despite that revenue being half as much as AWS’ server rentals, even after including figures for Azure, SQL Server, Windows Server, GitHub, the Microsoft Partner Network, and more.

On the other hand, while Google Cloud’s $9.2 billion quarterly revenue was lower than AWS’ $24.2 billion and Microsoft’s $25.9 billion, it grew at a faster rate of around 26% year-over-year.

The pace of growth of these rivals, which aim to use new GenAI offerings like Gemini and Microsoft Azure OpenAI services to narrow the sales gap with AWS, is an obvious threat.

Garman faces a host of significant decisions, such as determining how AWS will keep pace with the massive data centre expansions of cloud rivals Microsoft and Google.

Garman’s best bet may be to attempt to acquire, or develop deeper ties with, Anthropic, which could prove to be AWS’ ace against OpenAI and Google.

AWS and Google already offer Anthropic’s LLMs to their cloud customers, and AWS has committed $4 billion to the startup to spend on AWS cloud services.

However, Garman was among the leaders who played a part in declining an offer from Anthropic to establish a close partnership similar to the one between Microsoft and OpenAI in 2021.

For at least the past several months, Garman has regularly met with Jassy, Selipsky, and Amazon board directors to discuss strategies for growing AWS revenue, including how the unit will compete in the era of conversational AI.

Other Challenges

In addition to AI, Garman has another short-term challenge: increasing the number of Fortune 1,000 companies that choose AWS when they move workloads from their own data centres to the cloud.

These customers tend to buy cloud services differently from the startups and other technology companies that fueled AWS’ early lead, as it’s harder for older businesses to move to the cloud from their data centres. They have understandable concerns about hosting sensitive data in a cloud provider’s servers.

Earlier this year, Garman reorganised the sales team around specific industries, such as financial services, automotive, healthcare, and life sciences, to customise sales pitches to these firms.

The move also aimed to address deep-rooted problems related to conflicting strategies among sales leaders and customer complaints about the quality of service they experienced from sales and technical teams.

However, Garman’s reorganisation hasn’t gone as smoothly as expected. Some AWS sellers who were previously considered generalists moved into industry sales groups based on their proximity to certain customers, not their experience in those fields.

AWS is working to get these sellers up to speed in areas that include learning how industry-specific regulations impact customers’ use of the cloud.

Making AWS More Lucrative with GenAI Services

AWS primarily sells cloud computing instead of specific business applications, which has proven problematic when competing with Microsoft, which can pair cloud services with key business offerings.

This could soon change, as AWS is leaving no stone unturned in an effort to avoid conceding any territory. Just a day before the new CEO’s ascension, AWS made two personnel changes that could help push the company towards building apps.

An outgoing Selipsky, in a Monday note to staff, announced the creation of a new group called AWS Solutions, which Colleen Aubrey, a high-ranking Amazon advertising executive, would oversee.

This new group would combine a team developing applications such as call centre services and business messaging apps with other teams developing services for specific industries, including healthcare and life sciences.

In another development, Dilip Kumar, who helped develop Amazon’s Just Walk Out technology for retail stores, will now oversee Amazon Q, a tool customers can use to analyse their companies’ data by making requests conversationally rather than using complex database queries.

Q also allows customers to develop generative AI apps, such as a tool that creates onboarding plans for new hires.

Q has seemingly gotten off to a slow start since AWS announced it in December, but the company continues to believe in it and has said that companies such as Toyota, GoDaddy, and National Australia Bank use Q.

Jassy also recently created a team to develop large AI models that aim to compete with those from Anthropic and OpenAI, pulling numerous AWS AI developers into the new team.

Now, it is upon the incoming boss to determine how heavily AWS continues to invest in its generative AI push, including investments and acquisitions in startups, its AI server chips Trainium and Inferentia, and the pace at which it increasingly integrates generative AI services into its cloud and data centre offering.

It will be interesting to watch an energised and hungry company veteran like Garman steer AWS through this AI arms race.


Yotta Appoints Anil Pawar as Chief AI Officer, Marks Next Phase in AI Strategy

Yotta Data Services has announced the appointment of Anil Pawar as Chief AI Officer and Head of the AI Cloud Business Unit. This move marks the next phase in Yotta’s AI strategy and builds on the company’s multi-million-dollar investment in its AI offerings.

In his new role, Anil will oversee strategic initiatives such as AI-as-a-Service (AIaaS), AI Platform-as-a-Service (AIPaaS), AI Software-as-a-Service (AISaaS), and the Large Language Model (LLM) marketplace within the Shakti Cloud Business Unit. He will report directly to Sunil Gupta, Co-founder, MD, and CEO of Yotta Data Services.

Anil’s appointment comes as Yotta transitions its focus towards cloud services platforms, such as Shakti and Yntraa, aligning with the company’s strategic shift. His most recent stint was as the Head of Tech Strategy at Rakuten Japan, where he oversaw AI programs and AI Cloud transformation initiatives. He has also been instrumental in establishing one of the world’s largest data-driven networks at Reliance Jio.

Commenting on the appointment, Sunil Gupta said, “Anil brings with him his rich experience in the field of AI and digital transformation. I’m optimistic that in his new role at Yotta, he will help strengthen our position and help our clients unlock their true potential.”

Pawar, an industry veteran with over 25 years of experience and multiple US patents in diverse communication technologies, expressed his excitement about joining Yotta. “The opportunity to leverage AI and cloud technologies to drive innovation and create value for customers is truly enticing. I look forward to working with the talented team at Yotta to deliver cutting-edge solutions and drive growth in the Shakti AI Cloud Business Unit,” he said.

With Pawar at the helm of its AI initiatives, Yotta aims to reinforce its commitment towards enabling India’s digitisation while driving growth in the Shakti AI Cloud Business Unit.
