Data Science Hiring Process at Instahyre

In the first half of 2023, over two lakh employees were laid off. But amid the job losses plaguing the market, AI-powered HRTech platform Instahyre says it is focused on providing the best opportunities.

To address the pain of millions of job seekers, Instahyre’s data science team has tackled one of its primary challenges by optimising the job-matching process through ‘Instamatch’, its proprietary recommendation system. It improves the efficiency and effectiveness of the job search experience for both job seekers and employers.

“Instamatch has changed how companies approach hiring, changing the modus operandi from mass emails, keyword search, and unanswered phone calls to a holistic data-driven, tech-based candidate personality and company DNA mapping, which has taken candidate experience and hiring conversions to a whole new level, reducing the time and cost to hire drastically,” said Sarbojit Mollick, cofounder of Instahyre, in an exclusive conversation with Analytics India Magazine.

Founded in 2017 by Aditya Rajgarhia and Mollick, Instahyre has applied AI, ML and data science across its operations to build an optimised recruitment platform that provides personalised job matches, streamlined candidate evaluation, and improved efficiency for recruiters and candidates alike.

The company claims a 70% reduction in time to hire and a threefold reduction in costs compared to traditional methods. With over 10,000 companies using its services and a staggering 40 million candidates on the platform, Instahyre has earned the trust of major industry players such as Amazon, Google, PayPal, Salesforce, Walmart, Oracle, Razorpay, Paytm, PhonePe, JP Morgan, Adobe, and Myntra.

Inside the AI and ML Operations of Instahyre

In its operations, Instahyre effectively implements AI and ML to harness the power of data science and derive valuable insights.

A prominent area where data science is applied at Instahyre is in candidate-company matching. Using Instamatch, the data science team ensures that job seekers are matched with companies based on a comprehensive set of factors, including skills, experience, and individual preferences. This results in a more precise and personalized job-matching experience for users.

Additionally, Instahyre employs natural language processing (NLP) and machine learning algorithms to parse and analyse resumes, letting the platform extract relevant information and streamlining candidate evaluation for both job seekers and employers. The company says it has been using generative AI since its inception six years ago.
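Instahyre's actual parser is proprietary and would use trained NLP models; still, the general idea of pulling structured fields out of free-text resumes can be sketched with simple pattern matching. Everything below, including the skill vocabulary, is an illustrative assumption:

```python
import re

# Illustrative sketch only: Instahyre's real parser is proprietary and
# uses trained NLP models. The skill vocabulary below is a made-up example.
KNOWN_SKILLS = {"python", "java", "sql", "nlp", "spark"}

def parse_resume(text: str) -> dict:
    """Extract an email address, years of experience, and known skills."""
    email = re.search(r"[\w.+-]+@[\w-]+\.\w+", text)
    years = re.search(r"(\d+)\+?\s+years", text, re.IGNORECASE)
    tokens = set(re.findall(r"[a-z+#]+", text.lower()))
    return {
        "email": email.group(0) if email else None,
        "years_experience": int(years.group(1)) if years else None,
        "skills": sorted(KNOWN_SKILLS & tokens),
    }

resume = "Jane Doe, jane@example.com. 5 years building NLP systems in Python and SQL."
print(parse_resume(resume))
```

A production system would replace the regexes with named-entity recognition and a far larger skill taxonomy, but the output shape, structured fields from unstructured text, is the same.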

Moreover, Instahyre assists recruiters with tools like Instahyre Talent Insights, which provides a global overview of the talent pool for each job post. The AI-driven approach automates various aspects of the hiring process, including candidate sourcing, shortlisting, and scheduling interviews. It also conducts deliberate screening and assessment, leading to effective candidate evaluation and offer rollouts for chosen individuals.

Tech Stack Employed

Instahyre draws on various technology capabilities, including Applicant Tracking System (ATS) integration. The platform integrates with widely used applicant tracking systems employed by employers and recruiters, facilitating the smooth transfer of candidate data, updating application statuses, and streamlining the entire hiring process for the company.

Furthermore, the data science team at the company employs a range of tools, applications, and frameworks to tackle challenges and make informed decisions. These resources encompass MySQL, Python, Java, NLP, and other relevant technologies. By combining these tools, they aim to excel in data-driven endeavours and maintain a dynamic work environment.

Interview Process

Instahyre seeks potential candidates with specific expertise in different domains. For the ML Engineer role, they require proficiency in Python, NLP, and other deep learning concepts. On the other hand, for the Data Engineer position, the desired skills encompass Python, Java, and Scala, as well as experience with technologies like Hadoop, Spark, Kafka, MySQL, MongoDB, Cassandra, AWS, and Azure.

Their data science hiring process involves multiple steps to find the right candidates. It begins with a thorough review of resumes, focusing on educational background, work experience, and essential skills in mathematics, statistics, programming, and machine learning. Shortlisted candidates then undergo a technical assessment evaluating their abilities in data analysis, programming languages, statistical modelling, and problem-solving. Successful candidates proceed to a technical interview, where their expertise and problem-solving capabilities are examined in depth. Lastly, behavioural interviews assess candidates’ communication skills, problem-solving approach, and cultural fit within the collaborative team environment.

Expectations

Once selected, candidates joining the data science team at Instahyre can expect a dynamic and intellectually stimulating environment. They will have the opportunity to work on cutting-edge technologies, solve challenging problems, and collaborate with a highly skilled and diverse team. The company provides access to the latest tools and resources to support their work and encourages continuous learning and professional development.

“In return, Instahyre expects candidates to have a solid foundation in data science, including a strong understanding of statistics, mathematics, and machine learning algorithms,” added Mollick.

Excellent programming skills, strong analytical thinking, problem-solving abilities, and effective communication of complex ideas are also highly valued qualities expected from candidates joining the data science team.

Mistakes to Avoid

When interviewing for a data science job at Instahyre, candidates should avoid some common mistakes that can hamper their chances. One mistake is failing to show practical experience and the real-world impact of their work. Candidates should share specific project examples, the challenges they faced, and the outcomes they achieved.

Another mistake is arriving unprepared for technical questions or lacking a solid grasp of core data science principles. Thorough preparation and demonstrated mastery of key concepts are essential to impress the interviewers.

Candidates should highlight their practical experience with different datasets and explain how their work has made a difference. Instahyre values innovation, creativity, and a growth mindset, so applicants should try to embody these qualities.

Work Culture

“Our work culture is positive and the best part is there is no micromanagement. Employees are trusted and given autonomy to handle their tasks, leading to increased productivity and accountability,” added Mollick. The company follows a remote work policy, letting employees work from their preferred locations and allowing Instahyre to tap into talent from diverse geographic areas.

“What sets Instahyre apart from its competitors, especially in terms of working with the data science team, is its collaborative and cross-functional approach,” he added.

The company encourages collaboration between data science and other teams, fostering diverse perspectives and knowledge-sharing to solve complex problems. It also places a strong emphasis on innovation and continuous learning, offering opportunities for professional development and research activities.

So if you want not only to work on impactful projects that directly benefit users and the recruitment industry as a whole but also to grow professionally and personally, Instahyre is the place for you. Check out their careers page now.

Read more: Data Science Hiring Process at Naukri.com

The post Data Science Hiring Process at Instahyre appeared first on Analytics India Magazine.

Meta Hopes a PyTorch-like Success for Llama 2


When it comes to open source AI, there is no doubt that Meta has been leading the way. For nearly a decade, Meta AI has been a pioneer in open-sourcing high-impact AI-related tools and technologies, including notable contributions such as PyTorch, React, and the original LLaMa model. Now, with Llama 2, the company is taking further steps to bolster its open-source stance.

This approach has won Meta widespread adoption of its technology, with developers trying out the model in every way possible, just like PyTorch. But for Meta, the benefits remain to be seen.

Facebook (Meta) gives us:
– React
– PyTorch
– LLaMA 2
All can be used commercially.
Zuck is based.

— Arjuna Sky Kok (👉, ⚡) (@arjunaskykok) July 18, 2023

Interestingly, a lot of companies have already started to adopt Llama 2 within their systems. For example, Perplexity AI, LunaAI, Poe, and others have integrated Llama 2 into their chatbots, and many smaller companies have started leveraging Llama 2 with their proprietary data through platforms like Databricks and Amazon SageMaker. Siding with the open-source community, Meta seems to be playing the good guy by allowing something it built at great cost to be used commercially by others. How exactly does Meta profit from this?

The Meta Business Strategy is Risky

Take Threads, the Twitter alternative the company released recently: within a month of launch it was so dead that the company is trying to introduce AI personas to keep users engaged. The same goes for the Metaverse, where people these days are thinking more about Apple than Meta after the announcement of the Vision Pro headset. In AI, the company is hell-bent on siding with open source rather than its customer-facing products.

This is probably because PyTorch’s open-source nature helped it gain widespread adoption, making it one of the most popular deep learning frameworks in the world. The same is true of React. Llama 2’s open-source release aims to continue this legacy and advance the boundaries of AI research while fostering collaboration and further innovation.

Meta has driven PyTorch’s development with a non-profit approach. For example, it handed governance of the framework to the PyTorch Foundation, run openly under the Linux Foundation for transparency. The GitHub repository already boasts more than 150,000 projects built on it. Similarly, the number of projects built on Llama 2 is growing on Hugging Face.

On the other hand, explaining how Meta profits from releasing Llama 2 publicly while keeping it restricted for big tech, Mark Zuckerberg said, “the largest companies with public cloud offerings don’t just get a free licence to use this. They will need to make a business arrangement with us”. This seems to be the way forward for Meta with Llama 2, considering Zuckerberg also thinks that Meta is four years behind OpenAI.

FAIR has been open-sourcing high-impact AI-related stuff for almost 10 years now.
That includes PyTorch & LLaMA.
More to come…. https://t.co/HG2hitRBnk

— Yann LeCun (@ylecun) May 7, 2023

Regardless, Yann LeCun’s endorsement of Llama 2 only reinforces the potential impact this open-source language model can have on the AI community. By providing free access to Llama 2’s capabilities, Meta is enabling startups and smaller organisations to leverage state-of-the-art AI technology, levelling the playing field and empowering the next generation of AI-driven applications. Meta hopes the model will fix many of the problems with current LLMs and drive research, while also making a few bucks for itself, all done transparently.

Meta probably does not care about revenue from generative AI

Meta’s biggest source of revenue is its advertising business, and by integrating AI simply across all its apps, it may be able to make the most of AI right now. The company also reported its most profitable quarter since 2021. Meanwhile, companies like Google and OpenAI can keep working on making their models better.

The success of PyTorch serves as a compelling example of how Meta’s open-source strategy can lead to significant achievements. As an open-source deep learning framework, PyTorch gained rapid popularity within the AI community. Its user-friendly interface, dynamic computation graph, and strong community support made it a favourite among researchers and developers alike. Consequently, PyTorch became the go-to platform for numerous AI projects and research initiatives, cementing its position as one of the leading frameworks in the field.

Another avenue is leveraging the open-source model to attract top talent and foster innovation within Meta itself. Access to Llama 2’s codebase can entice AI experts to work with Meta, further strengthening the company’s AI research and development efforts. Additionally, the adoption of Llama 2 by external developers can lead to broader integration into various products and applications, potentially increasing Meta’s visibility and influence in the AI ecosystem. That is all it wants at the moment.

Conclusively, building on the success of PyTorch, Llama 2 is poised to become a driving force behind groundbreaking AI projects, inspiring the next generation of researchers, developers, and entrepreneurs. As Meta continues to embrace the open-source approach, it sets the stage for a more inclusive and thriving AI community, where accessibility, collaboration, and shared knowledge drive the future of AI.

The post Meta Hopes a PyTorch-like Success for Llama 2 appeared first on Analytics India Magazine.

6 Worldcoin Alternatives You Should Know 

Worldcoin has been hogging the limelight since its launch last week, with people rushing to scan their irises with a shiny orb. While its crypto token is one aspect of the company, Worldcoin has a higher goal: providing universal basic income to people living in a dystopian world where AI kills jobs. Making that feasible requires an identifier to prove that you are a human; a proof of personhood will be crucial.

While the thought may seem futuristic, Worldcoin is not the only company offering a digital-passport-style tool for proving you are human. Here is a list of companies that provide proof of personhood, through either a social-graph-based method or biometrics.

Proof of Humanity

A decentralised identity project built on the Ethereum blockchain, Proof of Humanity aims to establish a global, sybil-resistant identity system (one that prevents a single entity from creating multiple fake identities) based on social verification, with individuals verifying their identity through video submissions. It can serve as a point of entry wherever individuals need to prove they are real humans and not bots, and it can be plugged into a variety of applications that require such identity systems.

BrightID

BrightID is a private, decentralised open source technology that helps with identity verification. It uses a social graph without storing personal information and enables fair access to apps with just one account. The company believes in growing a ‘free and democratic society’ while protecting one’s privacy.

Rollup ID

Since the launch of Worldcoin, the company has been vocal about calling itself a ‘compelling alternative’ to Worldcoin. Rollup ID helps create a digital identity, a digital passport of sorts for an individual, and allows applications to customise the platform to their specific needs. The platform supports standard protocols, including OpenID Connect (OIDC) and OAuth2, for integrating with other applications. Rollup ID does not use biometrics for verification and empowers users to control their online identities.
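An OIDC integration typically begins with the application redirecting the user to the identity provider's authorization endpoint. A minimal sketch of building such a request follows; the endpoint and client values are entirely hypothetical placeholders, not Rollup ID's actual API:

```python
from urllib.parse import urlencode

# All values here are hypothetical placeholders, not Rollup ID's real
# endpoints; consult the project's own documentation for integration details.
AUTH_ENDPOINT = "https://auth.example.com/authorize"

def build_oidc_auth_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Build a standard OIDC authorization-code request URL."""
    params = {
        "response_type": "code",    # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",  # the 'openid' scope marks the request as OIDC
        "state": state,             # opaque token for CSRF protection
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

url = build_oidc_auth_url("my-app", "https://myapp.example.com/callback", "xyz123")
print(url)
```

After the user authenticates, the provider redirects back to `redirect_uri` with a one-time code that the application exchanges server-side for ID and access tokens.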

Setup your passport now and we won't scan your eyeballs! 🙈 https://t.co/ciPapx1IBw

— Rollup ID (@RollupID) July 27, 2023

Idena

Idena is a proof-of-person blockchain where every node is tied to a crypto identity representing a single individual with equal voting power, making it decentralised in every sense. Proving one’s identity requires no personal information and no KYC procedures. With its privacy-focused approach, the company ensures data remains secure when participating in the Idena ecosystem.

Gitcoin Passport

Gitcoin Passport is an app built on the Ceramic network (a decentralised, open-source protocol). It brings together identity confirmations called ‘stamps’ from regular websites (Web2) and blockchain-based services (Web3). These stamps can be obtained by linking accounts to the Gitcoin Passport system and connecting this ‘passport’ to various apps and communities. The system also does not save any personal information.

Circle

Built by Circle, Verite is a free and open-source system that allows identity claims to be verified in Web3 without revealing personal information. Web applications, mobile apps, and smart contracts can easily confirm the identity of participants using Verite credentials. Verite credentials are stored in a secure crypto wallet and individuals can have control over how their identity attributes are shared. It is decentralised with no single entity having authority over usage or development.

1/ Introducing Verite, an open-source framework for decentralized identity. 🧵👇https://t.co/phrdgSq6a6

— Circle (@circle) February 17, 2022

The post 6 Worldcoin Alternatives You Should Know appeared first on Analytics India Magazine.

As spend management space heats up, Brex and Rho turn to AI startups to help power new products

By Mary Ann Azevedo

The competition in the spend management space continues to intensify.

Brex and Rho today each announced AI-enabled accounts payable offerings.

Their announcements coincidentally came out the same day competitor Ramp announced it had expanded into procurement, further evidence that companies in the space are clamoring not only to meet customer demand but also to outdo each other in what they can offer customers to help control spend.

Specifically, Brex today revealed Payables, its AI-enabled Accounts Payable (AP) offering, while Rho announced new AI-powered Accounts Payable automation capabilities. Brex’s offering is live today while Rho said its new capabilities will be live later this month.

Via email, Brex co-CEO and co-founder Henrique Dubugras told TechCrunch that launching the new product had been “in the works” since the startup started building Empower, its spend management platform, over a year ago.

He noted that while Brex has used artificial intelligence for years in various capacities such as customer support and underwriting, what is new now is that it partnered with “multiple” machine learning companies such as Scale AI and Photon “to drive the highest accuracy of information extracted from invoices.”

Prior to this launch, Dubugras said that Brex offered a lighter version of bill pay that gave customers the ability to send scheduled and recurring payments. Now, he said they will “have even more advanced spend controls with multi-level approvals.”

For its part, Rho said it is offering AI-powered invoice and bill processing to its clients. Specifically, invoices sent to a designated AP inbox will “undergo automatic digitization” powered by generative AI technology.

In a statement, the company said the process “transforms the invoice into a bill and creates a corresponding liability in the client’s integrated ERP system. Clients can then authorize bill payments through Rho one by one or in bulk, with liabilities automatically marked as paid in the ERP.”

Rho CEO Everett Cook told TechCrunch via email that the new capabilities had been in the works for nearly a year, building on the company’s initial accounts payable release in 2021. Rho has partnered with OpenAI — a portfolio company of Rho investor DFJ Growth.

With the new product, he claims, customers will be able to “configure one-click workflows that help finance teams process thousands of payables in seconds.”

“Our position on generative AI is that it is only useful if it is grounded in tangible business value,” said Rishav Chopra, SVP of product & design at Rho.

Large opportunity

Besides wanting to better compete, both Brex and Rho expect their new offerings to increase revenue for their respective companies.

Dubugras said the new payables product should increase the percentage of customers’ spend processed via Brex.

“As a result, some of that spend will be on their Brex card, one way in which Brex earns revenue,” he told TechCrunch. “Plus, using a Brex business account for bill pay, another way in which Brex earns revenue, allows customers to send payments faster, eliminating ACH delays while also earning passive yield.”

Brex claims that it is unique relative to other companies in the market in that it is “the only player” with its own business account that can earn revenue in this way, allowing the company to offer payables for free. (TechCrunch has not independently verified this claim.)

Meanwhile, Rho’s Cook believes that while the “timing is pretty coincidental” with Brex’s announcement, he supposes each of their customers were telling them “the same things” — that “they’re fed up with their legacy AP providers and want a modern solution that’s directly integrated with the rest of their finance stack.”

Legacy providers include the likes of Bill.com and Concur.

Dubugras believes there is a lot of competition in the space for a very good reason, telling TechCrunch: “The spend management space is very dynamic and that is because the opportunity is so large across SaaS and payments. Beyond the noise there is still a lot of differentiation between the players.”

Rho’s Chopra also believes that the current macro environment has led to increased pressures on the part of CFOs and finance teams “to move faster than ever and operate leaner.” This in turn has — for obvious reasons — created more demand for spend management products.


Gen AI to Increase US Production — With Caveats

An AI hand and a human hand touching a brain.
Image: peshkova/Adobe Stock

Generative AI will change, but not replace, many of the jobs now held by employees in the U.S., McKinsey found in its 2023 report “Generative AI and the future of work in America.”

Jump to:

  • Generative AI could open up more time for high-value work
  • It’s more complex than ‘AI will take jobs’
  • Demand for STEM workers expected to increase

Generative AI could open up more time for high-value work

Generative AI may increase U.S. labor productivity, McKinsey found: by 0.5% to 0.9% annually through 2030. The survey also considers generative AI as part of a wider pool of automation technology; with all automation technologies taken into account, total productivity growth could increase by 3% to 4% annually.
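To put those annual rates in cumulative terms, they can be compounded over the seven years from 2023 to 2030 (reading "through 2030" as a seven-year horizon is an assumption):

```python
# Compound each quoted annual productivity rate over seven years (2023-2030);
# the seven-year horizon is an assumption about how to read "through 2030".
def cumulative_gain(annual_rate: float, years: int) -> float:
    """Total productivity gain from compounding an annual rate."""
    return (1 + annual_rate) ** years - 1

for rate in (0.005, 0.009, 0.03, 0.04):
    print(f"{rate:.1%}/yr over 7 years -> {cumulative_gain(rate, 7):.1%} cumulative")
```

On this reading, generative AI alone would lift productivity roughly 3.6% to 6.5% in total, and all automation combined roughly 23% to 32%.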

However, there are caveats. The 3% to 4% estimate depends on a “midpoint” of adoption, which will require enthusiastic adoption by both public and private stakeholders. Another possible pain point is the time it will take employees to learn and become comfortable with new tools.

The report pointed out the importance of guidelines like the White House’s voluntary agreement with generative AI makers and the industry’s Frontier Model Forum.


“When machines take over dull or unpleasant tasks, people can be left with more interesting work that requires creativity, problem-solving, and collaborating with others,” wrote report authors Kweilin Ellingrud, Saurabh Sanghvi, Gurneet Singh Dandona, Anu Madgavkar, Michael Chui, Olivia White and Paige Hasebe.

“Workers will need to gain proficiency with these tools and, importantly, use the time that is freed up to focus on higher-value activities. When managers automate more of their administrative and reporting tasks, for example, they can spend more time on strategic thinking and coaching,” the McKinsey report stated.

It’s more complex than ‘AI will take jobs’

The fields most exposed to generative AI are still expected to add jobs through 2030, albeit more slowly than comparable fields, the report found.

STEM jobs will see both increasing demand for workers and a high degree of changes to their day-to-day work due to AI and automation, McKinsey said. How exactly everyday work will change depends on both individual choice and industry trends. “The biggest impact for knowledge workers that we can state with certainty is that generative AI is likely to significantly change their mix of work activities,” the report stated.

The fields most likely to be both affected by AI and see a decrease in demand are customer service/sales and office support. However, several fields that can expect to see tasks transform as generative AI is applied to them — STEM and also business/legal professionals, creatives and educators — will see an increase in demand.

Figure A

Comparison of which jobs will be most affected by AI and which will see higher demand for labor. Image: McKinsey & Company

Automation overall was estimated to take over tasks accounting for 21.5% of the hours worked in U.S. jobs by 2030. Including generative AI in that mix increases the share of automated time to 29.5%.

Countries across the world are working on establishing regulations around generative AI for other reasons outside of the shift in labor, including concerns about privacy, inaccuracies and use of copyrighted material.

Demand for STEM workers expected to increase

“There’s a big fear that AI is going to eliminate lawyers or writers, or, [that] it’s going to take a lot of those jobs,” said Sanjay Poonen, CEO of Cohesity (and former COO of VMware), in an interview with TechRepublic. “It’s the same way as when everyone thought about the automobile or anything that came in that ultimately made people more productive.”

While healthcare is the area expected to open up the most jobs in the future, tech is also hiring, despite the high-profile layoffs this year. McKinsey estimates a 23% increase in demand for STEM jobs by 2030. After all, it reasons that “companies of all sizes and sectors” need people who know tech and those who can help expand companies’ existing digital transformations. Tech workers will be in demand in fields like banking, insurance, pharmaceuticals and healthcare.

“I view generative AI especially as an extremely powerful tool to help you get a first draft of something better,” Poonen said. “If generative AI and ChatGPT could be used to help me get a first draft, it makes me more productive as opposed to being viewed that technology eliminates somebody’s job.”


KDnuggets Top Posts for June 2023: GPT4All is the Local ChatGPT for your Documents and it is Free!


It's time for the top posts of June 2023! See which of the posts published in June drew the most readers.

As a reminder: top posts are defined as the posts with the highest number of views normalized over the first 14 days of post publication.
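The normalization isn't spelled out further, but one plausible reading is views per day, counted over at most the first 14 days since publication; so a recent post isn't penalised for having had less time to accumulate views. A sketch of that reading, with invented view counts and dates:

```python
from datetime import date

# One plausible reading of "views normalized over the first 14 days":
# views per day, counting at most the first 14 days after publication.
# Titles are real June posts; view counts and dates are invented.
def normalized_views(views: int, published: date, as_of: date) -> float:
    days_live = min((as_of - published).days, 14)
    return views / max(days_live, 1)

posts = [
    ("GPT4All is the Local ChatGPT for your Documents", 8400, date(2023, 6, 1)),
    ("Falcon LLM: The New King of Open-Source LLMs", 4000, date(2023, 6, 20)),
]
as_of = date(2023, 6, 28)
ranked = sorted(posts, key=lambda p: normalized_views(p[1], p[2], as_of), reverse=True)
for title, views, published in ranked:
    print(title, normalized_views(views, published, as_of))
```

Capping the window at 14 days means an older post's rate reflects only its launch period, keeping early and late June posts comparable.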

Check them out below!

  1. GPT4All is the Local ChatGPT for your Documents and it is Free! by Fabio Matricardi
  2. Falcon LLM: The New King of Open-Source LLMs by Nisha Arya
  3. 10 ChatGPT Plugins for Data Science Cheat Sheet by KDnuggets
  4. ChatGPT for Data Science Interview Cheat Sheet by KDnuggets
  5. Noteable Plugin: The ChatGPT Plugin That Automates Data Analysis by Cornellius Yudha Wijaya
  6. 3 Ways to Access Claude AI for Free by Abid Ali Awan
  7. What are Vector Databases and Why Are They Important for LLMs? by Nisha Arya
  8. A Data Scientist’s Essential Guide to Exploratory Data Analysis by Miriam Santos

Thanks to everyone who contributed, and looking forward to seeing what the top posts of July are!


Dataiku and Databricks Survey Highlights Booming ROI of AI and Future Challenges

August 1, 2023, by Jaime Hampton


The rapid adoption of generative AI appears to be having a transformative effect on businesses, according to a new joint survey released by Dataiku and Databricks.

The "AI, Today" survey report found over 70% of senior AI professionals report seeing a positive ROI for data science, analytics, and AI initiatives. Over half (54%) reported seeing at least a 2x return and 5% say they saw a return greater than 10x.

Ever since the release of OpenAI’s ChatGPT, generative AI has hit the mainstream with much fanfare. But Dataiku and Databricks wanted to know if data and AI professionals share the same enthusiasm. The authors also wanted to understand the larger AI environment and designed the survey to uncover insights regarding tech stacks, AI tools and services spending, and use case reach.

Those leading the charge with data, analytics, and IT functions will ultimately transform generative AI from a novel technology into a day-to-day reality in the workplace, the report states. Three in five survey respondents say companies need AI to realize the full value of their data.

(Source: Dataiku/Databricks)

Businesses are embracing generative AI, as 64% of those surveyed reported planning to use generative AI or large language models over the next year, with 45% already experimenting with it. Investment in AI resources is also heavy, the survey found, with 47% reporting AI tools budgets over $5 million for the next year and nearly one in five (17%) reporting budgets over $20 million.

Dataiku’s Chief AI Strategist Jepson Taylor said AI and data science are delivering results, despite facing high expectations and unprecedented market attention.

“AI Pioneers are seeing clear bottom-line results from AI that would impress any CFO. Our survey reveals that where they need the most help is the right AI tools and platforms to convert their mountains of data into market-moving results faster than ever while protecting against potential risk,” Taylor said in a statement.

When it comes to AI adoption, those surveyed cited issues like data quality and speed of deployment as their top roadblocks. AI leaders saw cost and lack of business use cases as the least important barriers to delivering more value from AI, the survey noted.

(Source: Dataiku/Databricks)

The Dataiku and Databricks report also compared AI pioneers, or those seeing strong returns from AI implementations, to AI laggards, or those still experimenting with AI or seeing lower returns. The survey showed that AI pioneers were 62% more likely to report having at least one data leader in their C-suite (70%) compared to those at nascent organizations (43%). Those lagging behind were 43% more likely to say they do not have clear owners responsible for data quality, at 39% of pioneers versus 56% of laggards.
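The "X% more likely" framing above is the relative difference between the two groups' proportions. A quick check reproduces the reported figures to within a point (the small gap is presumably rounding in the underlying survey data):

```python
# Relative difference between two proportions, matching the survey's
# "X% more likely" phrasing.
def pct_more_likely(p_a: float, p_b: float) -> float:
    """How much more likely group A is than group B, in percent."""
    return (p_a / p_b - 1) * 100

print(pct_more_likely(0.70, 0.43))  # C-suite data leader: pioneers vs laggards
print(pct_more_likely(0.56, 0.39))  # no clear data-quality owner: laggards vs pioneers
```

The first works out to about 63% (the report says 62%) and the second to about 44% (the report says 43%).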

The rapid proliferation of generative AI has shown a need for caution, as AI models are capable of generating text and images that are inaccurate, biased, or harmful. The survey found that 55% of AI leaders reported that fears around AI are justified and that they’re more worried than excited about the future of AI. Additionally, the same percentage (55%) called for official regulation of AI.

“These findings demonstrate the significant interest in generative AI but also the accompanying challenges, from data access and privacy to regulation,” said Prem Prakash, Head of AI Marketing at Databricks. “As we navigate these challenges, our shared mission with Dataiku stands firm: to democratize data and AI, enabling every organization to build their generative AI solutions securely and cost-effectively.”

Not all early-stage AI startups are created equal

Rebecca Szkutak

The AI sector has gotten hotter over the last year. But unlike many past venture fads — like crypto or web3 — the AI sector had a number of large startups and legacy players already active when the market started to froth.

There have been AI exits and there are even whiffs of potential government regulation. This dynamic makes it a much more complex ecosystem for founders and investors alike — especially considering many of them weren’t paying attention to AI even a year ago.

Entrepreneurs have flocked to the sector, and early-stage investors are trying to cut through the noise to find which startups are merely riding the hype and which have the potential to grow into substantial companies.

One thing investors are looking for, as in other sectors, is companies with a moat, or competitive advantage over rivals. With deep-pocketed players like Microsoft, Google and OpenAI also actively building in the category, investors want to make sure they aren’t backing companies that could be made irrelevant by the actions of one of the larger entities.

Chris Wake, the founder and managing partner at Atypical Ventures, told TechCrunch+ that while his firm is currently taking a step back from AI to see how things play out, he doesn’t see much appeal in startups that are building on top of existing large language models.

“Building on someone else’s model to solve a business problem, you [have to] understand it’s a race to the bottom,” Wake said. “You can create an interesting business but not necessarily a transformative business. For me, that doesn’t seem incredibly interesting.”

The Debate Around AI Ethics in Australia is Falling Far Behind


AI tools are undeniably useful, and that’s why AI development and use is accelerating exponentially. AI is now used for everything from research to writing legal arguments, from image generation and storytelling for artists through to supporting coders.

However, as useful as these tools are, AI presents severe ethical concerns, and while AI tools are being pushed out for public use in massive numbers right now, the discussions around AI ethics remain just that — discussions, with little regulatory push behind them. Many nations, including Australia, want to regulate AI, but such regulation is still some way off, and there is ongoing debate over how much high-value “innovation” these regulations would curb in the name of ethical best practice.


Consequently, while the ethical consumption of technology in general has become an ever-greater priority, AI has become something of a wild west, where too many of the ethical boundaries are left to each individual’s own moral compass.

From plagiarism and the potential for AI to compromise the integrity of academic research, to biases leading to discrimination, to job losses and even deaths caused by AI, we as a society need to develop better ethical frameworks around this technology.

We need this to happen quickly because, while we might not be headed directly for Skynet, AI is going to play a massive role in shaping our futures. We need to make sure the ethical foundations these applications are built on are properly considered before we allow AI to “take over” in any meaningful context.

The ‘lost decade’ for AI ethics

In 2016, the World Economic Forum looked at the top nine ethical issues in artificial intelligence. These issues have all been well-understood for a decade (or longer), which is what makes the lack of movement in addressing them so concerning. In many cases, the concerns the WEF highlighted, which were future-thinking at the time, are starting to become reality, yet the ethical concerns have yet to be actioned.

Unemployment: What happens after the end of jobs?

As the WEF flags: “Look at trucking: it currently employs millions of individuals in the United States alone. What will happen to them if the self-driving trucks promised by Tesla’s Elon Musk become widely available in the next decade?”

Meanwhile, a paper published this year acknowledges that there are insufficient job alternatives for the 50% of truckers who face displacement. Job losses from AI — particularly in fields where the working base tends to be older or have lower educational qualifications — are an ethical concern that has long been known, yet across the world, policymakers and private businesses alike have shown little urgency in helping affected individuals reskill and find new opportunities.

Inequality: How do we distribute the wealth created by machines?

The WEF acknowledges that there’s potential for AI to further concentrate wealth. After all, AI works for a fraction of what skilled employees do, and it won’t unionize, take sick days or need to rest.

By tasking AI with work while cutting the total size of the workforce, companies are creating a better profit position for themselves. However, this isn’t a benefit to society unless the wealth is transferred back into it.

“AI will end the West’s weak productivity, but who exactly will benefit?” as The Guardian asked.

One solution would be for governments to move away from taxing labor and instead directly taxing AI systems. The public wealth generated by doing this could be used to provide those out of work or moved into lower-paid jobs with necessary income support. However, even as jobs are already being impacted, there is no sign of even a debate to transform the taxation system in kind.

Bias: How do we address bias and potential racism and sexism generated by AI applications?

The WEF noted the potential for AI bias in its initial article, and this remains one of the most talked-about and debated AI ethics issues. There are several examples of AI assessing people differently by race and gender. However, as UNESCO noted just last year, despite a decade of debate, bias remains embedded in AI right down to the core.

“Type ‘greatest leaders of all time’ in your favorite search engine, and you will probably see a list of the world’s prominent male personalities. How many women do you count? An image search for ‘school girl’ will most probably reveal a page filled with women and girls in all sorts of sexualised costumes. Surprisingly, if you type ‘school boy,’ results will mostly show ordinary young school boys. No men in sexualised costumes or very few.”

It was one thing when these biases were built into relatively benign applications like search engine results or when they simply delivered a poor user experience. However, AI is being increasingly applied to areas where bias has very real, potentially life-altering consequences.

Some argue that AI will result in a “fairer” judicial system, for example. However, the in-built biases of AI applications, which have yet to be addressed despite a decade of research and debate, would suggest a very different outcome than fairness.

Theft: How do we protect artists from having their work and even identities stolen by those using AI applications?

As UNESCO noted, in 2019, Huawei used an AI application to “complete” the last two movements of the unfinished Franz Schubert Symphony No. 8. Meanwhile, AI is being used to create voicebanks that allow users to generate speech from deceased celebrities such as Steve Jobs. One of the key motivating factors behind the recent actors’ and screenwriters’ strikes has been the concern that AI will be used to mimic them, even after their deaths and without their direct consent, for projects they won’t earn money from.

Elsewhere, an AI artist used generative AI to create a cover for a video game rather than the original image the publisher had commissioned. There was also fierce backlash against the world’s largest online artists’ community, DeviantArt, for allowing AI algorithms to gather data from artists without their permission.

The debate over just where the line lies between acceptable and unacceptable use of creative rights by AI is raging, loudly and vocally. Yet while that debate is ongoing, AI developers are releasing tools that enable a laissez-faire approach for AI artists, while governments continue to rely on antiquated and inadequate IP rights, created before AI was even a concept, to regulate the space.

Disinformation: How do we prevent AI from being used to further the spread of disinformation?

“AI systems can be fooled in ways that humans wouldn’t be,” the WEF report from 2016 noted. “For example, random dot patterns can lead a machine to ‘see’ things that aren’t there.”

Just recently, research found that ChatGPT went from correctly answering a simple math problem 98% of the time to getting it right just 2% of the time. Earlier this year, a new Google AI application made a critical factual error that wiped $100 billion off the company’s market value.

We live in an era where information moves quickly and disinformation can affect everything, all the way up to critical election results. Some governments have made mild attempts to introduce disinformation laws; Australia, for example, is keen to target “systemic issues which pose a risk of harm on digital platforms.” Yet AI applications are being released into the wild with no obligation to be accurate in the information they present, and the big problem with disinformation is that once it has influenced someone, it can be difficult to correct the record.

A common pattern for a critical problem

These are only five examples of AI impacting jobs, lifestyles, and people’s very existences. In all these examples, the ethical concerns about them have been well known for many years, and yet, even though the ethical debate hasn’t been settled, the release of these applications has gone on uninhibited.

As the popular saying goes, “It’s better to ask forgiveness than permission.” That seems to be the approach those working in AI have taken with their work. The problem is that this quote, originally from Admiral Grace Hopper, has been wildly misinterpreted from its original intent. In reality, the longer AI development goes without being answerable to ethical considerations, the longer developers are allowed to ask for forgiveness rather than permission, and the more difficult it will be to walk back the harm these applications are causing in the name of progress.
