IBM Watsonx is Tailored Specifically For Enterprises

IBM emerges as a top contender to gain from generative AI, according to a recent study. IBM’s generative AI offering, Watsonx, positions the tech giant as a significant beneficiary of the emerging technology. “With Watsonx, enterprises can scale and accelerate the impact of advanced AI by accessing a full technology stack for training, tuning, and deploying AI models,” Geeta Gurnani, IBM Technology CTO and technical sales leader, IBM India & South Asia, told AIM.

Generative AI gained momentum after the launch of ChatGPT, with its ability to converse almost like a human. IBM, on the other hand, has been working on Watson for more than a decade; it was initially designed as a question-answering computer system. It is therefore fair to say that IBM already had the foundation in place to bring generative AI capabilities to its customers. Gurnani states that while many generative AI tools primarily focus on consumers and social purposes, Watsonx is tailored specifically for enterprises.

“Generative AI opens doors to improved learning, productivity, and innovation across industries. Business leaders are excited about the prospect of using foundation models and machine learning with their own data to accelerate generative AI workloads. Watsonx allows just that, enabling them to deploy models for various enterprise use cases that could range from improving IT operations to enhancing the HR function, and everything in between,” Gurnani said.

Watsonx

With Watsonx, IBM is offering its customers an AI development studio with access to IBM-curated and trained foundation models and open-source models, access to a data store to enable the gathering and cleansing of training and tuning data, and a toolkit for data and AI governance. “We have been a leader in the work of foundation models – and Watsonx is IBM’s push to put state-of-the-art foundation models in the hands of businesses. We are going beyond capabilities that focus on generating the next word or image in a sequence – and rather building and applying foundation models for entirely unexplored business domains such as geospatial intelligence, code, and IT operations.”

Further, explaining with the example of IBM’s collaboration with NASA, Gurnani said that IBM announced the joint development of large-scale geospatial AI models for NASA’s earth science satellite data and a large language model for its earth science literature. “This is the first time foundation models have been applied to NASA’s satellite data. This could potentially help in estimating climate-related risks to agriculture, monitoring forests for carbon-offset programs, and developing predictive models to mitigate and adapt to climate change.”

IBM’s customers will be able to leverage open-source models on the Hugging Face platform, along with the foundational models developed by IBM. “The collaboration with Hugging Face enables IBM’s customers to benefit from open-source models trained on accessible datasets, running within a secure environment with compliance and proper data governance. It expands the range of models and architectures available, allowing clients to leverage the best AI capabilities for their specific business requirements,” Gurnani said.

Foundational models powering Watsonx

With Watsonx, IBM’s customers will also get access to IBM’s foundational models, which use a large, curated set of enterprise data backed by a robust filtering and cleansing process and auditable data lineage. “These models are being trained not just on language, but on a variety of modalities, including code, time-series data, tabular data, geospatial data, and IT events data.”

However, not everyone will have access to these models just yet. IBM plans to offer selective access to a few of its clients in a beta version to begin with; soon after, the models will be made available to all its customers. IBM’s foundational models are categorised into three main segments. The first, fm.code, comprises models built to automatically generate code for developers through a natural-language interface, boosting developer productivity and enabling the automation of many IT tasks.

The second, fm.NLP, is a collection of large language models (LLMs) designed for industry-specific domains that use curated data, where bias can be mitigated more easily and models can be quickly customised using client data. The third, fm.geospatial, is a model built on climate and remote-sensing data to help organisations understand and plan for changes in natural disaster patterns, biodiversity, land use, and other geophysical processes that could impact their businesses.

Mitigating generative AI risk

While most enterprises are eager to leverage generative capabilities, the technology comes with its own set of challenges, for example, hallucinations. “At IBM, we are actively addressing challenges such as hallucinations and security while ensuring the ethical and responsible use of AI within Watsonx.”

According to Gurnani, IBM is reducing the risk of hallucination using retrieval augmented generation, which would enable models to retrieve relevant data from a knowledge corpus before generating an answer. “Users also have the ability to tune existing models to perform specific tasks using domain-specific datasets, which can also help reduce the risk of hallucination.”
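The retrieval step Gurnani describes can be sketched in a few lines: fetch the most relevant passages from a knowledge corpus, then prepend them to the prompt so the model answers from grounded text rather than from memory alone. The corpus, scoring function, and prompt format below are illustrative assumptions for a toy example, not IBM's implementation.

```python
# Minimal sketch of retrieval augmented generation (RAG): retrieve relevant
# documents first, then condition the model's answer on them.
from collections import Counter

corpus = [
    "Watsonx provides a studio for training, tuning, and deploying AI models.",
    "Retrieval augmented generation grounds answers in retrieved documents.",
    "Geospatial foundation models can analyze satellite imagery.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count of word tokens shared by query and document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the answer is grounded in the corpus."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer using only the context."

prompt = build_prompt("How does retrieval augmented generation reduce hallucination?")
```

A production system would replace the word-overlap score with embedding similarity over a vector index, but the flow — retrieve, then generate from the retrieved context — is the same.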

Moreover, she adds that for any technology to be widely used, its output must be trusted by users. Hence, IBM is introducing an AI governance toolkit, expected to be generally available later this year, which will help operationalise governance to mitigate the risk, time, and cost associated with manual processes, and provide the documentation necessary to drive transparent and explainable outcomes. “It will also have mechanisms to protect customer privacy, proactively detect model bias and drift, and help organisations meet their ethics standards.”

The post IBM Watsonx is Tailored Specifically For Enterprises appeared first on Analytics India Magazine.

How will AI impact your industry? Pew Research has answers


A big concern about AI is how it will impact the workforce, specifically its potential to replace humans. However, AI is likely to affect some industries more than others, depending on the nature of the role.

The Pew Research Center analyzed federal data to learn which workers and industries are the most likely to be replaced or aided by AI and found that a whopping 19% of Americans were in such roles.

Also: Another major university is supporting generative AI use but with serious guardrails

To find out if yours made the list, read on.

If your job requires a bachelor's degree or higher, you are likely on the list. The study found that workers with a bachelor's degree or higher were more than twice as likely as workers with just a high school diploma to see AI exposure.

Specifically, the high-exposure occupations, or those most likely to be replaced or aided by AI, were budget analysts, data entry keyers, tax preparers, technical writers, and web developers.

These occupations are likely the most affected because of generative AI's ability to handle analytical tasks well, such as coding, writing, mathematics, and more.

Also: The best AI chatbots

Medium-exposure occupations include chief executives, veterinarians, interior designers, fundraisers, and sales managers.

Lastly, the low-exposure occupations include barbers, childcare workers, dishwashers, firefighters, and pipelayers.

As the results show, the occupations with the least exposure are those that require a physically present human, such as a barber or a dishwasher, or skills that cannot be recreated by AI, such as the interpersonal communication needed to be a sales manager or chief executive.

Also: Trust in ChatGPT is wavering amid plagiarism and security concerns

Interestingly, the study also showed that workers in the most exposed industries don't feel that their jobs are at risk; they find AI more likely to assist than replace them.

Specifically, 32% of workers in information and technology said AI would help them more than hurt them, despite being in an industry that AI could heavily impact.


Vast Data Boosts AI Infrastructure with New Unified Data Platform

August 4, 2023, by Jaime Hampton


Vast Data unveiled a new platform at its Build Beyond event earlier this week. The VAST Data Platform is the company’s new offering that unifies storage, database, and virtualized compute engine services, designed for the deep learning era.

The proliferation of large language models has thrust generative AI and deep learning into the spotlight. Vast says this new era of AI-driven discovery has the potential to accelerate solving humanity’s biggest challenges like fighting disease, addressing climate change, and uncovering new fields of science and math.

As enterprises build AI applications for these endeavors, data management has become an essential aspect. Deep learning applications require AI infrastructure that can deliver capabilities like parallel file access, GPU-optimized performance for neural network training and inference on unstructured data, and global access to data from multiple sources including hybrid multi-cloud and edge environments.

“This new data platform was designed for the deep learning era to scale up to levels natural data requires – pictures, genomes, video, sound – and to enable machines to understand and generate insight and discoveries from these vast datasets,” said Vast Data Founder and CEO, Renen Hallak, as he unveiled the new platform at Build Beyond.

In designing the new platform, Vast says it sought to resolve fundamental infrastructure tradeoffs that have previously limited applications from computing and understanding datasets from global infrastructure in real-time. The company considered many types of unstructured and structured data in designing the platform, including data from video, imagery, free text, data streams, and instrument data.


To close the gap between event-driven and data-driven architectures, Vast says the VAST Data Platform can access and process data in any private or major public cloud, understand natural data by embedding a queryable semantic layer into the data itself, and continuously and recursively compute data in real time.

When building AI applications, it is necessary to give structure to unstructured data. To address this, Vast has added a native semantic database layer, VAST DataBase. The company says it was designed for rapid data capture and fast queries at any scale and claims it is the first system to break the barriers of real-time analytics, from the event stream to the archive.

The second element in the new platform is the VAST DataEngine, a global function execution engine consolidating datacenters and cloud regions into one computational framework. Vast says the engine supports popular programming languages like SQL and Python and introduces an event notification system along with reproducible model training for managing AI pipelines.

Finally, the third key element is the VAST DataSpace, a global namespace that the company says permits every location to store, retrieve and process data from any location with high performance while enforcing strict consistency across every access point. The DataSpace allows the new platform to be deployed in on-prem and edge environments while also extending it to leading public clouds including Google Cloud, AWS, and Microsoft Azure.

In its "Worldwide AI Spending Guide," IDC predicts that global investment in AI-centric systems will continue to grow at double-digit rates, reaching a five-year CAGR of 27% and exceeding $308 billion by 2026, according to Ritu Jyoti, group VP of AI and automation research practice at IDC.

“Data is foundational to AI systems, and the success of AI systems depends crucially on the quality of the data, not just their size. With a novel systems architecture that spans a multi-cloud infrastructure, Vast is laying the foundation for machines to collect, process and collaborate on data at a global scale in a unified computing environment — and opening the door to AI-automated discovery that can solve some of humanity's most complex challenges,” Jyoti noted.

Nvidia’s partnership with Vast was highlighted at the event. The new VAST Data Platform is integrated with Nvidia’s DGX AI supercomputing infrastructure, accessible to enterprises building generative AI applications.


Manuvir Das, VP of enterprise computing at Nvidia, explained how the company has seen accelerated computing evolve, noting that it seems to have come full circle.

“If you think about the evolution of computing, it's been interesting the phases it has gone through. Back in the 2000s, there was a realization that the workloads require more and more data, and so we moved into a model of data-centric computing,” he said.

“And then we had the advent of the cloud,” Das continued. “It was this great place to find compute, but the storage buffers were basically empty, where people started filling up those storage repositories in the cloud. So we actually went back to a model where we were bringing data to the compute again. I think now we've gone full circle where there's enough data in these locations in the clouds that we can think about bringing compute to the data again.”

One user of the VAST Data Platform is the nonprofit research group Allen Institute, which uses it to process the large datasets needed to map neural circuits in its research focused on the brain.

David Feng, director of scientific computing at Allen Institute, said the organization collects a gigantic amount of data with new files growing to hundreds of terabytes within a few days.

“Everything changes about how you need to manage data when it’s that big and that fast,” he said in a statement. “We were excited to work with Vast because of the performance they could offer at this scale, and the system’s multiple protocol support is critical to our entire pipeline. Taking advantage of new advancements in AI will be pivotal to help us make sense of all of this data, and the VAST Data Platform allows us to collect massive amounts of data, so that we can ultimately map as many neural circuits as possible — and its mechanisms for collaboration enable us to rapidly share that data around the world.”


AI bots could soon become your new customer service agent


Artificial intelligence is often seen as a sort of "big bad wolf" of technology, largely because of its potential to disrupt humanity as we know it. One of the biggest theories that trigger apocalyptic fears is the idea that AI will steal the jobs currently performed by humans, triggering mass layoffs and a shift in the economy. That theory isn't too far-fetched.

About 28% of current jobs can be automated by AI, and companies like IBM have already begun replacing parts of their workforce with AI-powered tools. AI can easily perform many jobs in human resources, though not all, and it can automate many mundane tasks; businesses are also using it for customer service.

Also: Will AI take programming jobs or turn programmers into AI managers?

According to a report by Gartner, worldwide spending on conversational AI technology in customer service centers for 2023 is projected to increase by 16.2% from 2022, totaling $18.6 billion. The investment is evident in the projected growth of conversational AI in customer contact centers by 366% by 2027.
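A quick arithmetic check of the Gartner figures above (assuming the 366% figure means total growth over the period rather than an annual rate) gives the implied 2022 baseline and the implied 2027 spending level:

```python
# Sanity-check the Gartner conversational AI spending figures quoted above.
spend_2023 = 18.6            # $B, projected 2023 customer-service conversational AI spend
growth_vs_2022 = 0.162       # +16.2% year over year
spend_2022 = spend_2023 / (1 + growth_vs_2022)   # implied 2022 baseline
spend_2027 = spend_2023 * (1 + 3.66)             # 366% total growth => 4.66x the 2023 level

print(f"implied 2022 spend: ${spend_2022:.1f}B")  # about $16.0B
print(f"implied 2027 spend: ${spend_2027:.1f}B")  # about $86.7B
```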

"Longer-term, generative AI and growing maturity of conversational AI will accelerate contact center platform replacement as customer experience (CX) leaders look to simultaneously improve the efficiency of customer service operations and the overall customer experience," according to Megan Marek Fernandez, director analyst at Gartner.

Conversational AI includes chatbots like ChatGPT and virtual assistants like Siri and Alexa. It's a technology that uses machine learning and natural language processing to understand text and speech input and respond accordingly.

Companies are already turning to conversational AI to improve their customer service experience. Zoom announced earlier this year that it would add Anthropic's AI chatbot, Claude, to improve customer interactions. Quiq is a company that specializes in adding conversational AI to customer experience departments across retail and hospitality brands, used by companies like Brink's and Lane Bryant.

Also: Most workers want to use generative AI to advance their careers but don't know how

"Companies are beginning to understand how much more powerful the latest AI is and how it can improve their CX departments. They are turning their attention to the implementation of an AI-based customer service solution in the next few years," said Mike Myer, CEO of Quiq.

Gartner, the research company behind the report, says that the conversational AI market is the fastest-growing segment in the customer service or contact center forecast, driving 24% growth in 2024.

"This means that while many IT investment areas will be weakened as budgets tighten, customer service and support initiatives that have the potential to differentiate the customer experience or streamline customer service operations could receive easier investment 'buy-in'," explained Marek Fernandez. "These factors will help contact center as a service (CCaaS) projects receive funding associated with broader corporate digital transformation budgets."

Also: Generative AI is coming for your job. Here are 4 reasons to be excited

Of course, the current economic uncertainty certainly drives at least some choices to shift strategies to AI over humans. Conversational AI like chatbots can be far cheaper than hiring people, after all.

"Companies are simply looking for ways to do more with less. This has accelerated the interest in AI solutions faster than expected. The timing of the market conditions in combination with the revolution of AI technology has gotten the attention of brands across the board," added Myer.


Pony, investor Toyota partner to ‘mass produce’ robotaxis in China

By Harri Weber

Autonomous-driving company Pony.ai and Toyota say they’re teaming up with the goal of one day cranking out a bunch of “fully driverless robotaxis.”

The two companies intend to kick off their partnership sometime this year with around $139 million in capital from GAC Toyota Motor Co. — a joint venture between Toyota China and GAC, a Chinese state-owned automaker.

The investment follows Toyota’s move to pump about $400 million into Pony back in 2020. Going forward, Toyota says it’ll give Pony an unspecified number of its EVs, while Pony will outfit them with autonomous-driving tech and the firm’s “robotaxi network platform.”

Without context, $139 million may sound like a lot, but Pony has raised more than a billion dollars since its founding in 2016. Things arguably haven’t gone smoothly throughout the self-driving developer’s lifetime.

In 2021, Pony kicked off driverless-vehicle testing in California only to see its permit suspended six months later. The same year, the company seemed to shrink its autonomous trucking ambitions when it consolidated its R&D teams and shed a couple of executives. The next year, Pony recalled its autonomous-driving software and sued two former staffers for allegedly swiping trade secrets when they left to found a startup called Qingtian Truck. Yet, around the same time, Pony claimed to be worth $8.5 billion (and that’s the last we’ve heard of its valuation to date).

Pony isn’t alone in its trials. The entire autonomous vehicle industry, once a darling in the VC world, has gone through a consolidation that has seen numerous startups wither and disappear, particularly in the United States. The few that remain — a small group of well-funded companies that are either publicly traded or owned by large corporations — are starting to scale up commercial operations, albeit slower than perhaps originally forecast.

AI chip startup Tenstorrent lands $100M investment from Hyundai and Samsung

By Kyle Wiggers

The appetite for hardware to train AI models is voracious.

AI chips are forecast to account for up to 20% of the $450 billion total semiconductor market by 2025, according to McKinsey. And The Insight Partners projects that sales of AI chips will climb to $83.3 billion in 2027 from $5.7 billion in 2018, a compound annual growth rate of 35%. (That’s close to 10 times the forecast growth rate for non-AI chips.)
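The Insight Partners figure is easy to verify: growing from $5.7B in 2018 to $83.3B in 2027 spans nine compounding periods, and the standard CAGR formula recovers roughly the 35% rate quoted.

```python
# Verify the quoted compound annual growth rate for AI chip sales.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over `years` compounding periods."""
    return (end / start) ** (1 / years) - 1

rate = cagr(5.7, 83.3, 2027 - 2018)  # 9 periods: 2018 -> 2027
print(f"{rate:.1%}")  # about 34.7%, i.e. roughly 35% per year
```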

Case in point, Tenstorrent, the AI hardware startup helmed by engineering luminary Jim Keller, this week announced that it raised $100 million in a convertible note funding round co-led by Hyundai Motor Group and Samsung Catalyst Fund.

Indeed, $50 million of the total came from Hyundai’s two car-making units, Hyundai Motor ($30 million) and Kia ($20 million), which plan to partner with Tenstorrent to jointly develop chips, specifically CPUs and AI co-processors, for future mobility vehicles and robots. Samsung Catalyst and other VC funds, including Fidelity Ventures, Eclipse Ventures, Epiq Capital and Maverick Capital, contributed the remaining $50 million.

Unlike equity, a convertible note is short-term debt that converts to equity upon some predetermined event. Why Tenstorrent opted for debt over equity isn’t entirely clear — nor is the company’s post-money valuation. (Tenstorrent described it as an “up-round” in a release.) Tenstorrent last raised $200 million at a valuation eclipsing $2 billion.

The convertible note tranche, which had participation from Fidelity Ventures, Eclipse Ventures, Epiq Capital, Maverick Capital and more, brings Tenstorrent’s total raised to $334.5 million. Keller says it’ll be put toward product development, the design and development of AI chiplets and Tenstorrent’s machine learning software roadmap.

Toronto-based Tenstorrent sells AI processors and licenses AI software solutions and IP around RISC-V, the open source instruction set architecture used to develop custom processors for a range of applications.


A top-down view of Tenstorrent’s custom-designed hardware for AI processing. Image Credits: Tenstorrent

Founded in 2016 by Ivan Hamer (a former embedded engineer at AMD), Ljubisa Bajic (the ex-director of integrated circuit design at AMD) and Milos Trajkovic (previously an AMD firmware design engineer), Tenstorrent early on poured the bulk of its resources into developing its own in-house infrastructure. In 2020, Tenstorrent announced Grayskull, an all-in-one system designed to accelerate AI model training in data centers, public and private clouds, on-premises servers and edge servers, featuring Tenstorrent’s proprietary Tensix cores.

But in the intervening years, perhaps feeling the pressure from incumbents like Nvidia, Tenstorrent shifted its focus to licensing and services and Bajic, once at the helm, slowly transitioned to an advisory role.

In 2021, Tenstorrent launched DevCloud, a cloud-based service that lets developers run AI models without first having to purchase hardware. And, more recently, the company established partnerships with India-based server system builder Bodhi Computing and LG to build Tenstorrent’s products into the former’s servers and the latter’s automotive products and TVs. (As a part of the LG deal, Tenstorrent said it would work with LG to deliver improved video processing in Tenstorrent’s upcoming data center products.)

Tenstorrent — nothing if not ambitious — opened a Tokyo office in March to expand beyond its offices in Toronto as well as Austin and Silicon Valley.

The question is whether it can compete against the other heavyweights in the AI chip race.

Google created a processor, the TPU (short for “tensor processing unit”), to train large generative AI systems like PaLM-2 and Imagen. Amazon offers proprietary chips to AWS customers both for training (Trainium) and inferencing (Inferentia). And Microsoft, reportedly, is working with AMD to develop an in-house AI chip called Athena.

Nvidia, meanwhile, briefly became a $1 trillion company this year, riding high on the demand for its GPUs for AI training. (As of Q2 2022, Nvidia retained an 80% share of the discrete GPU market.) GPUs, while not necessarily as capable as custom-designed AI chips, have the ability to perform many computations in parallel, making them well-suited to training the most sophisticated models today.

It’s been a tough environment for startups and even tech giants, unsurprisingly. Last year, AI chipmaker Graphcore, which reportedly had its valuation slashed by $1 billion after a deal with Microsoft fell through, said that it was planning job cuts due to the “extremely challenging” macroeconomic environment. Meanwhile, Habana Labs, the Intel-owned AI chip company, laid off an estimated 10% of its workforce.

Complicating matters is a shortage in the components necessary to build AI chips. Time will tell, as it always does, which vendors come out on top.

KPMG Survey: Momentum for Generative AI Continues to Build in Organizations


It has been nearly a year since ChatGPT burst onto the scene, and excitement about it in organizations continues to grow. A newly released study by KPMG found that 97% of respondents expect their organization to be highly or extremely highly impacted by generative AI in the next 12 to 18 months.

Not only that, but generative AI is rated as the top emerging enterprise technology: 80% believe it will disrupt their industry, and nearly all (93%) think generative AI will provide value to their business.

Jump to:

  • Generative AI investment fever
  • Regulatory concerns are not halting gen AI adoption plans
  • Generative AI will contribute to workforce positivity
  • Survey methodology

Generative AI investment fever

The enthusiasm for generative AI is translating into significant increases in related tech investments. According to the KPMG study, a majority (80%) of respondents said they anticipate increasing their investments in generative AI by more than 50% in the next six months to a year. 45% say it will more than double.

Many of the survey respondents said they will use this increased investment to prioritize infrastructure and scale business through generative AI applications.

SEE: Firm study predicts big spends on generative AI (TechRepublic)

Improving existing business models and leadership demand are the leading drivers behind this increase. Even though respondents report that investments will happen in functional areas across the board, three in four business leaders are looking to leverage generative AI for marketing and sales.

How generative AI has already impacted businesses

Generative AI has already made an impact in areas such as customer service and self-service support, Steve Chase, KPMG U.S. consulting lead, told TechRepublic. “Our clients who have leveraged generative AI to augment knowledge workers in call centers or make enhancements to their virtual chatbots are already seeing a significant return on investment.”

However, the main reason for their success is because of earlier investments made in automation technologies. These allowed for swift implementation of generative AI into their processes and systems, Chase added.

“Another area our clients are getting a lot of value from is generative AI’s ability to automate content generation, such as proposal responses, social media assets, email templates and more,” he said. “It’s also made copywriting and editing tremendously more efficient.”

Survey respondents also cited IT/tech and operations as the other functional areas that are expected to be impacted the most by generative AI.

Regulatory concerns are not halting gen AI adoption plans

While respondents expressed some concern about generative AI regulation, most businesses do not expect to slow their adoption plans: 35% of respondents said they will not pause adoption, and 41% are willing to take a three-to-six-month pause to monitor the regulatory landscape.

Business leaders also anticipate the most regulatory actions around data privacy (51%), security (46%) and transparency (44%). Less is anticipated around copyright (25%) and bias (24%).

Although most (77%) business leaders said that the uncertain and evolving regulatory landscape impacts their investment decisions, they are confident they’ll meet the requirements. In preparation, many are hiring specialized talent (data privacy, bias), creating new roles and/or partnering with external consultants to help navigate the generative AI regulations.

Generative AI will contribute to workforce positivity

84% of respondents believe that generative AI will have a positive impact on their workforce. Half feel generative AI will likely expand their overall headcount. Not surprisingly, they also said hiring preference will go to those who specialize in generative AI.

Additionally, business leaders feel that generative AI is very likely to support workforce initiatives. This includes increased professional development, reduced overtime and more in-person connectivity.

SEE: TechRepublic Premium’s prompt engineer hiring kit

To keep pace with generative AI, about half of the respondent organizations are upskilling their existing employees through small- or large-scale efforts.

One of the noteworthy findings from this recent survey is a change in attitude from KPMG’s initial survey on generative AI conducted in April 2023. The survey “revealed that while executives expect generative AI to have an enormous impact on business, many of them felt unprepared for immediate adoption,” observed Chase.

When comparing the findings from April with the latest study from June, he said most organizations (66%) still haven’t moved out of the researching/piloting phase. This is still the case even though a majority of leaders still anticipate significant disruption to their industry.

“For leaders to capitalize on generative AI’s potential for growth and efficiencies, they will need to make critical decisions in the very near future that can move their organization into deployment and ultimately industrialization of the technology,” Chase said. “The clients we work with who have fully embraced generative AI are already seeing a substantial impact on their business and firmly believe it will be a competitive game changer for them in the future.”

Early-adopter organizations will be in a better position to drive enterprise-wide transformation and innovation versus organizations still in early-phase implementation, he added.

Survey methodology

KPMG said the survey was conducted online between June 9, 2023 and June 23, 2023, and reached 200 business decision makers in the U.S. The report also includes data from KPMG’s Gen AI survey conducted in April 2023 among 225 business leaders in the U.S. for comparison purposes. Respondents for this latest survey were from companies with $1 billion or more in annual revenue and included a mix of business functions and industries.

You can see a recap of the survey’s findings in the infographic in Figure A.

Figure A

KPMG’s June 2023 generative AI survey highlights. Image: KPMG


Another major university is supporting generative AI use but with serious guardrails


While some schools have curbed the use of generative artificial intelligence (AI), the University of Hong Kong (HKU) is going all in and urging both its teachers and students to embrace the technology.

The University of Hong Kong is supporting this by giving teachers and students free access to various generative AI tools, including Microsoft Azure OpenAI and OpenAI's ChatGPT and DALL-E.


HKU said it had provided free access to ChatGPT over the past few months, having introduced a policy for using generative AI in June. It will now expand the range of options to include other tools starting in the new semester, which kicks off in September.

HKU advocates five key areas of literacy: oral, written, visual, and digital communication, and, more recently, generative AI.

Alongside free access for teaching and learning purposes, HKU also will provide other resources, including training and online courses to guide the effective and responsible use of AI tools.

Under its generative AI policy, the university's teachers are urged to leverage generative AI to optimize student learning, such as fostering analytical thinking, and producing "creative and engaging" activities as well as content customized for individual students.


Teachers also can use generative AI in assessing students' work, establishing mechanisms to facilitate evaluation "authentically and fairly." "The aim is to ensure the responsible and effective use of generative AI tools and uphold the highest standard of academic integrity," HKU said.

To mitigate risks from applying the technology for work assessment, it noted that teachers must clearly outline expectations and provide guidance on how the use of generative AI tools in coursework and assignments should be properly declared and cited.

Students also will be incentivized to adopt such tools in their work with the use of alternative assessment methods, such as device-free examinations and student peer assessments.

The university said it would carry out periodic evaluations involving teachers, students, and IT administrators to ensure generative AI is effectively integrated and address any new challenges.


"HKU embraces generative AI and recognizes AI literacy as essential to teaching and learning," said Ian Holliday, who led the taskforce that formulated the generative AI policy paper. "Our goal is to enable our teachers and students to become not only AI-literate, but also leaders in exploiting the vast potential of generative AI for the benefit of mankind."

Holliday now chairs the university's advisory committee to oversee the integration of generative AI in teaching and learning activities.

HKU said it was directing funds totaling HK$15.7 million ($2.01 million) from the University Grants Committee's Fund for Innovative Technology-in-Education toward its generative AI initiatives. It also is looking to collaborate with universities from other regions and markets to further explore the potential of generative AI.

In January this year, New York City Schools blocked student and teacher access to ChatGPT on its devices and networks over "negative impacts" on learning as well as concerns about the accuracy of content.


Singapore in February said it supported the use of generative AI in schools, but urged students against over-reliance on such tools and to understand the limits of these technologies.

High trust in generative AI, but more work needed

In a global study released this week, 66% of executives expressed concerns over the potential for bias and disinformation from generative AI. Almost 80% of respondents, though, had a high or significant level of trust that generative AI could be tapped for their organization's future products and operations, according to the IDC survey, which was commissioned by Teradata. The study polled 900 senior executives in Asia, Europe, and the US.

Some 86% said more governance was necessary to ensure the quality and integrity of generative AI insights. And while 56% said they were under high or significant pressure to tap generative AI within their organization in the next six months to a year, only 42% said they had the skills in place to implement such technologies in that period.

Another 30% said they were ready to leverage generative AI today, the study found.


A Capgemini Research Institute report last month noted that consumers were enthusiastic about the use of generative AI in their daily activities. Only about half, at 51%, said they had explored such tools, with 53% using generative AI to help with their financial planning, noted Capgemini, which based its findings on a survey of 10,000 consumers across 13 markets, including Singapore, Australia, Japan, Germany, France, Sweden, the US, and the UK.

Some 67% of respondents believed they could benefit from using generative AI for medical diagnoses and advice, including 63% who liked the idea of using the technology for more accurate and efficient drug discovery.

The report, though, noted an apparent low awareness of the potential risks. Just 27% were concerned about the use of generative AI, and 49% were unfazed by its use to create fake news. Some 34% were worried about its use in phishing attacks, while 33% were concerned about copyright issues.


Forget ChatGPT, This New AI Assistant Is Leagues Ahead and Will Change the Way You Work Forever


I have been using ChatGPT and Bard for quite a while, and these tools have become an integral part of my workflow. I rely on them to generate code, conduct statistical tests, comprehend new terminologies, and produce analytical reports and summary papers. However, my experience improved significantly when I switched to Poe.

In this post, I will explain why I stopped using ChatGPT and why I believe Poe is a superior alternative for various data science tasks.

What is Poe?

Poe is a chatbot service that gives you access to state-of-the-art models such as Claude+, GPT-3.5-Turbo, GPT-4, Llama 2, and PaLM, all in one place. Moreover, it allows users to create and share customizable chatbots using an initial prompt. Poe is fast, easy to use, and provides accurate answers.


I prefer using Poe to ChatGPT and Bard because Poe is faster, more accurate, and offers more features. Poe allows me to clear the context with one click, while on ChatGPT I have to start a new chat session. Additionally, Poe lets me switch between models like Claude+ and Sage with a single click, whereas ChatGPT only offers GPT-4 and GPT-3.5-Turbo.

Features of Poe

Key Features of Poe:

  1. Speed: Poe loads and responds to prompts much faster than ChatGPT.
  2. Range of Models: Poe offers a wider variety of AI models that provide more accurate answers compared to GPT-3.5-Turbo.
  3. Ease of Switching Models: You can easily switch between AI models with a single click.
  4. Custom Chatbots: You can create customizable chatbots based on your preferences by providing an initial prompt.
  5. Community Chatbots: You can access and explore chatbots created by other Poe users for specific tasks.
  6. Simplicity: Signing up and getting started with Poe is simple and straightforward.
  7. Premium Access: A paid subscription gives you access to more advanced models like GPT-4 and Claude.
  8. Apps: Poe is available as both an Android and iOS app.
  9. Stable: It doesn't crash or bug out.
  10. Ease of Use: It is easy to clear the context and start a new chat.

How I use it for Data Science Tasks

You need to understand that these generative AI chatbots are your assistants. They can help you perform all kinds of tasks, from writing an essay to building end-to-end data science projects.

I generally use Poe for writing and for creating the structure of my content. If I don't like Claude-Instant's answer, I ask ChatGPT or Google PaLM instead, which lets me choose the best response. Additionally, I use Poe for:

  • Code generation: I use Poe to generate Python, R and SQL code for specific tasks like cleaning data, performing statistical tests, and building machine learning models. It helps debug my code and even generate full code samples to build web apps.
  • Content writing: Poe helps improve the grammar and structure of my blog posts and tutorials. It also summarizes long documents and generates better titles and excerpts for my blogs.
  • Technique learning: I ask Poe to explain new data science techniques and statistical tests, helping me quickly learn new skills.
  • Data analysis and exploration: Poe generates code to clean, explore, analyze and model my data. It helps validate data quality and identify issues.
  • Translation: I leverage Poe to translate code and text.
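To give a concrete sense of the code-generation use case above, here is the kind of data-cleaning snippet such an assistant might produce on request. This is a hypothetical sketch (the function name, the z-score threshold, and the sample data are made up for illustration), not actual Poe output:

```python
# Hypothetical example of assistant-generated data cleaning:
# drop missing values, then drop outliers by z-score.
from statistics import mean, stdev

def clean_and_summarize(values, z_thresh=2.0):
    """Remove None entries, then remove points more than z_thresh
    sample standard deviations from the mean. Returns the cleaned list."""
    present = [v for v in values if v is not None]
    mu, sigma = mean(present), stdev(present)
    if sigma == 0:
        return present  # all values identical; nothing to trim
    return [v for v in present if abs(v - mu) <= z_thresh * sigma]

raw = [10.2, None, 9.8, 10.5, 250.0, 10.1, None, 9.9]
cleaned = clean_and_summarize(raw)
print(cleaned)  # the None entries and the 250.0 outlier are removed
print(round(mean(cleaned), 2))
```

The point is not this particular snippet but the workflow: you describe the cleaning rule in plain language, the assistant drafts the function, and you review and test it before using it on real data.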

Overall, Poe acts as a helpful assistant that performs numerous repetitive tasks, freeing up my time for higher-level work and decisions. It has significantly increased my productivity as a data scientist.

Final thoughts

I suggest you start using it and feel the difference. I have been using Poe's free tier for three months, and I have no intention of going back to the official ChatGPT or Bard. If I want a response from ChatGPT, I simply switch models within Poe. It is that simple and fast.

If you're interested in learning how I utilize other AI tools to enhance my data science and content creation skills, please let me know in the comments, and I will write about them in the future.

Abid Ali Awan (@1abidaliawan) is a certified data scientist professional who loves building machine learning models. Currently, he is focusing on content creation and writing technical blogs on machine learning and data science technologies. Abid holds a Master's degree in Technology Management and a bachelor's degree in Telecommunication Engineering. His vision is to build an AI product using a graph neural network for students struggling with mental illness.


Trust in ChatGPT is wavering amid plagiarism and security concerns


ChatGPT has become a leading productivity tool for many of its users, particularly millennials, according to previous reports. But as artificial intelligence grows more popular, so do concerns about its security and trustworthiness. For ChatGPT, these two factors, along with reliability, have been deemed its biggest weaknesses by users.

The report comes from HundredX, a consumer insights firm that analyzed how ChatGPT user experiences compare to those taken from over 50,000 individual pieces of feedback about more than 70 seasoned productivity tools. These leading productivity tools include DocuSign, Microsoft Office, Zoom, Google Workspace, Adobe, and Slack.


ChatGPT's scorecard wasn't all bad. User satisfaction is above average, though not best in class, earning ChatGPT a Net Promoter Score (NPS) of 30 on a scale that runs from -100 to 100. User intent was also fairly positive: 40% of early adopters say they plan on using the tool more over the next 12 months, while only 10% say they will use it less, signaling continued growth.

"The key to addressing consumers' AI concerns is for the brand to implement some form of a data cleansing and screening mechanism for applications," according to Rob Pace, founder and CEO of HundredX. "If ChatGPT can effectively screen out false content, such as that produced by bots, then the reliability scores should increase meaningfully. For example, NPR has an NPS that is 35 points higher than the average of its media competitors, driven in large part by several "quality"-related scores such as reliability and trust."


ChatGPT's strengths, compared to the industry average for popular productivity tools, are its ease of use, performance, and value. This success led OpenAI, the company that created the AI chatbot, to make plugins and an API available for businesses that want to incorporate the technology into their business models.

But when users described their biggest qualms with ChatGPT, three stood out: reliability, security, and trust. Compared with other productivity software, ChatGPT drew more negative sentiment on trust and security, with illustrative comments from different users calling it "too easy for people to use for plagiarism" and noting "my students use this to cheat on assignments."

"On the one hand, embedding ChatGPT into existing models can be a massive opportunity in that it should eliminate several key concerns related to reliability," Pace told ZDNET. "For example, if my Microsoft Office tools can analyze my own data to boost productivity and save time within a walled garden I trust, then several privacy concerns evaporate. That is likely why Microsoft and ChatGPT believe they will be able to charge an extra $30 per month per user — a staggering sum of money."


A study conducted by Cryptomaniaks.com in March warned of the growing distrust of AI chatbots like ChatGPT, explaining that Google searches for "Is ChatGPT safe?" had grown by 614% since the bot's launch in November.

"As AI technology like ChatGPT continues to advance and integrate into our daily lives, it's vital to address the safety concerns that are emerging," according to a CryptoManiaks spokesperson. "This surge in searches highlights the need for greater public education and transparency around AI systems and their potential risks."

But who are the people favoring ChatGPT, distrust and unreliability concerns aside? According to the HundredX report, over 35% of users who are likely to continue using and promoting ChatGPT are under 40, while only 24% are over 40.

"How ChatGPT approaches initial customer feedback will play a huge role in not only how it is perceived but also its impact on society," added Pace.