Browser War is Not Over Yet: Zoho’s Ulaa Challenges Chrome, Safari, Others

Chennai-headquartered technology company Zoho Corporation has made a surprise entry into the web browser space with Ulaa, its new privacy-focussed web browser. Ulaa is marketing itself on the USP of blocking tracking and website surveillance. However, with big tech players such as Google and Apple dominating the browser space, can Ulaa chart a course of its own?

Derived from a Tamil word meaning ‘journey’, Ulaa is built as a privacy-driven platform. The web browser aims to safeguard users from unwanted ads, surveillance and other intrusive mechanisms used by third-party players. In addition to privacy features, users can synchronize browsing sessions across devices.

But the question here is: why a browser now?

In conversation with AIM, Praval Singh, Vice President of Marketing and Customer Experience at Zoho Corporation, said that the browser has been in the works for over three years.

“We did not build a browser just for the sake of it,” said Praval. Keeping users in mind, the organisation wanted to provide a secure option for browsing that gives users control over their personal information. “Most of the browsers that exist today are being used for data mining for better ad-targeting, as they are being offered by companies dependent on surveillance to feed their ad-revenue model.”

The Ulaa idea sounds noble, but there’s intense competition in the market.

Competing with the Big Guns

As per a StatCounter report, Google’s Chrome leads the browser market with a 63.51% share, followed by Apple’s Safari at 20.43% and Microsoft’s Edge at 4.96%. With such dominance from Google, can Ulaa carve out a segment of its own?

Singh believes that as the digital surface area expands rapidly, threat vectors will multiply too. “In India, active internet users are expected to reach 900 million by 2025.” These users are becoming “more aware of the privacy loopholes used by monopolies that pose a threat to their online privacy,” which brings forth the need for a “privacy-centric” browser.

Zoho Corporation considers the product to be well positioned from an infrastructure and scalability standpoint. “This is owing to the fact that to build a comprehensive browser, the depth and breadth of technical expertise required is hard to attain by a standalone browser company.” The company believes that the browser and its functionalities will be enhanced as Zoho owns the entire technology stack (infra) with a diverse product portfolio.

AI in Browser?

With the AI future that Zoho is working towards, and ChatGPT’s growing presence in the Zoho ecosystem, will the new browser be powered by generative AI? Praval Singh confirmed that though they have not yet integrated generative AI into the browser, they are exploring “secure and privacy-preserving ways” of doing so, and will work towards bringing AI-powered products into Ulaa.

With the announcement of Ulaa, Zoho also shared its plan of actively working in the generative AI segment. Zoho is advancing towards bringing AI into its products: the company has integrated its home-grown AI engine, Zia, with OpenAI, and 13 application extensions are now powered by ChatGPT. With increasing ChatGPT adoption, would Zoho be risking its data security?

In an interaction with AIM, Ramprakash Ramamoorthy, Director – AI Research, Zoho Corporation, said that ChatGPT is integrated “on a per account basis.” Businesses pay OpenAI to get a key, which they then enter into their Zoho account. This ensures the data of other Zoho users are safeguarded and kept separate from the ChatGPT integration. “OpenAI also has a legal privacy policy that states that data from paid users will not be used to train their models,” he said.

As an additional safety measure, a privacy card is displayed whenever data flows into OpenAI. This card serves as a precautionary measure, “so users don’t inadvertently share their sensitive data.”

Ramprakash also emphasises that in the long term, they will build their own Large Language Model. “With this, capabilities such as summarisation, text generation, paraphrasing and deep nested conversations will be available natively within the safety of Zoho cloud.”

The post Browser War is Not Over Yet: Zoho’s Ulaa Challenges Chrome, Safari, Others appeared first on Analytics India Magazine.

Maximizing Revenue in Psychology Practices: Leveraging AI for Billing Optimization

AI for Billing Optimization

Automated revenue cycle management (RCM) is becoming an increasingly vital component of the healthcare industry, streamlining and accurately processing complex billing tasks. Using artificial intelligence (AI) and machine learning capabilities helps ensure accuracy, minimizes human error, and frees up personnel to focus on more important tasks. In addition, utilizing AI for RCM helps automatically capture all claims-related data and efficiently identify any discrepancies or rejected claims.

This streamlined approach eliminates manual claims processing and enables healthcare billing specialists to be more productive with fewer resources. By automating mundane tasks and consolidating hundreds of RCM components into a unified system, artificial intelligence can significantly enhance the overall experience of managing the healthcare revenue cycle.

Understanding the Importance of Revenue Maximization

Revenue optimization is crucial for psychology practice owners. Efficient billing helps you maximize revenue and grow your practice. However, in-house psychology billing and revenue management can be time-consuming and distract you from patient needs. Your practice should prioritize billing optimization.

Exploring the Power of AI in Billing Optimization

AI is changing business, including psychology. MarketsandMarkets predicts the healthcare AI market will grow from $14.6 billion in 2023 to $102.7 billion in 2028. AI’s use in psychology is inevitable given this growth.

Key takeaways on how to leverage AI for streamlining billing processes in psychology practices

  • Healthcare providers can streamline the billing process in psychology practices with AI, from automated coding to advanced analytics.
  • AI can improve revenue cycle management with a patient-centric system that provides personalized care and efficiency.
  • AI can accelerate claims processing, prevent denial and collection losses, reduce manual labor costs, and improve patient satisfaction for healthcare billing professionals.
  • AI may help a practice save time and improve accuracy; focused data collection, system integration, and communicating actionable insights to the team are key.
  • AI can streamline workflows and boost profits when implemented with a clinic-specific plan.

AI-Enabled Appointment Scheduling and Reminders

Scheduling and reminders are major challenges in psychology practice management. AI-powered appointment scheduling can improve efficiency and reduce no-shows. Intelligent scheduling uses AI to detect scheduling conflicts, which can be difficult to spot manually. AI-enabled appointment reminders can automatically notify patients, saving practice staff time.
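The conflict-detection idea above can be sketched with a simple interval-overlap check. This is a hypothetical illustration only: the appointment data and pairwise comparison are assumptions, and a real scheduler would also weigh clinician preferences, rooms, and buffer times.

```python
# Hypothetical sketch of conflict detection in appointment scheduling:
# flag any pair of appointments for the same clinician whose time
# windows overlap.
from datetime import datetime, timedelta

def find_conflicts(appointments):
    """Return index pairs of overlapping (start, end) appointments."""
    conflicts = []
    for i in range(len(appointments)):
        for j in range(i + 1, len(appointments)):
            s1, e1 = appointments[i]
            s2, e2 = appointments[j]
            if s1 < e2 and s2 < e1:  # intervals overlap
                conflicts.append((i, j))
    return conflicts

day = datetime(2023, 5, 1, 9, 0)
appts = [(day, day + timedelta(minutes=50)),
         (day + timedelta(minutes=30), day + timedelta(minutes=80)),
         (day + timedelta(minutes=90), day + timedelta(minutes=140))]
print(find_conflicts(appts))  # [(0, 1)]
```

The same overlap test generalizes to rooms or equipment by grouping appointments per resource before comparing.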

Psychologists are already seeing results. After adopting AI-powered appointment scheduling, Dr. Emily Blake’s practice improved efficiency and patient satisfaction, and its patient no-show rate fell by 30%.

Automated Documentation and Coding

Psychology billing and revenue management require documentation and coding. AI-powered documentation and coding can save time and improve accuracy. AI can automatically translate clinical notes into billing codes using NLP. This eliminates manual coding, which is error-prone and time-consuming.
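As a rough sketch of the note-to-code idea, the toy function below maps session notes to candidate psychotherapy CPT codes. The keyword rules and session-length thresholds are illustrative simplifications, not a real coding engine; production systems use trained NLP models with human review.

```python
# Hypothetical sketch: suggesting a CPT billing code from a clinical note.
# Keyword phrases and the fallback logic here are illustrative assumptions.

CPT_RULES = {
    "diagnostic evaluation": "90791",   # psychiatric diagnostic evaluation
    "crisis": "90839",                  # psychotherapy for crisis
}

# Time-based psychotherapy codes keyed by minimum session length in minutes.
TIME_CODES = [(16, "90832"), (38, "90834"), (53, "90837")]

def suggest_cpt(note: str, minutes: int) -> str:
    """Return a candidate CPT code for a session note (for human review)."""
    text = note.lower()
    for phrase, code in CPT_RULES.items():
        if phrase in text:
            return code
    # Fall back to the longest time-based code the session qualifies for.
    code = "none"
    for threshold, time_code in TIME_CODES:
        if minutes >= threshold:
            code = time_code
    return code

print(suggest_cpt("Initial diagnostic evaluation completed.", 60))  # 90791
print(suggest_cpt("Routine psychotherapy session.", 45))            # 90834
```

Keeping the output as a suggestion rather than a final code preserves the human-in-the-loop review the article describes.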

AI-powered documentation and coding have shown real-world benefits. In a recent study, a machine learning model extracted 95% of clinical concepts from physician notes.

Intelligent Claims Processing

Psychology practice staff find claims submission and reimbursement tedious. AI-powered claims processing can streamline and speed up reimbursement. AI technology can detect claims errors and discrepancies, reducing denials. AI also submits claims and tracks reimbursements, saving staff time and money.
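The error-detection step can be pictured as a pre-submission claim "scrubber". This is a minimal sketch under assumed field names; real clearinghouses apply thousands of payer-specific rules.

```python
# Illustrative claim scrubbing: flag missing fields and inconsistencies
# before a claim is filed. Field names and rules here are hypothetical.

REQUIRED_FIELDS = ["patient_id", "provider_npi", "cpt_code",
                   "diagnosis_code", "date_of_service"]

def validate_claim(claim: dict) -> list:
    """Return a list of human-readable problems found in a claim."""
    problems = []
    for field in REQUIRED_FIELDS:
        if not claim.get(field):
            problems.append(f"missing {field}")
    # Simple consistency rule: psychotherapy CPT codes need a diagnosis.
    if claim.get("cpt_code", "").startswith("908") and not claim.get("diagnosis_code"):
        problems.append("psychotherapy code without diagnosis")
    return problems

claim = {"patient_id": "P-102", "provider_npi": "1234567890",
         "cpt_code": "90834", "diagnosis_code": "F41.1",
         "date_of_service": "2023-05-01"}
print(validate_claim(claim))  # [] -> clean claim, ready to submit
```

Claims that pass such checks before submission are far less likely to be denied, which is where the time and cost savings come from.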

AI-powered claims processing benefits psychologists. Since integrating AI technology into his practice’s claims processing, Dr. John Smith’s team’s administrative workload has decreased by 50% and revenue has increased by 20%.

Addressing Concerns and Risks in AI-Driven Psychology Billing Optimization

AI has risks like any new technology. AI-driven billing optimization risks data security, privacy, and HIPAA compliance. Psychologists must address these concerns and mitigate risks.

Psychology practices should carefully select AI vendors with strong security measures to protect data. Patient data must be encrypted, stored securely, and restricted. To stay ahead of threats, practices should establish data breach response protocols and regularly update their security systems.

AI-driven billing optimization requires regulatory compliance. Psychology practices should ensure their AI technology meets relevant regulations, such as HIPAA in the US. The AI vendor’s compliance measures must be thoroughly reviewed and certified.

By addressing these concerns and risks, psychology practices can confidently adopt AI technology for billing optimization while protecting patient data and maintaining compliance.

The benefits of AI for improving efficiency and accuracy of financial operations in psychology practices

AI can automate revenue cycle management and psychology billing. Doctors are increasingly using AI to improve financial operations and save healthcare providers time and money.

AI-driven automated RCM platforms can improve medical claims filing speed and accuracy, enhance patient satisfaction by enabling swift reimbursement, support accurate coding, flag potential discrepancies that require human intervention, ensure regulatory compliance, produce automated documentation, and help organizations detect and mitigate fraud risks. Healthcare organizations can reduce costs and improve reimbursement by automating revenue cycle processes with AI.

How can AI reduce administrative costs associated with processes like billing, collections, and payment processing?

Revenue cycle management systems powered by AI can reduce administrative expenses. AI can be utilized by healthcare billing professionals for data entry, account reconciliation, patient eligibility, and prior authorization, among other tasks. It automates billing, monitoring of accounts receivable, document matching, and payment reminders. AI helps administrators track financial health and focus on productivity by reducing the amount of manual labor needed to manage the revenue cycle. AI can improve healthcare billing and administration.

Conclusion:

For long-term success, psychology practices must optimize billing processes to maximize revenue. Psychology practices can improve efficiency, billing, and revenue by using AI technology. AI-enabled appointment scheduling, automated documentation and coding, and intelligent claims processing save time, reduce errors and speed up reimbursement.

Data security and compliance are important AI implementation concerns. Psychology practices can protect patient data by choosing reputable AI vendors, implementing strong security measures, and complying with relevant regulations.

AI Career Notes: May 2023 Edition

May 8, 2023, by Mariana Iriarte

In this monthly feature, we bring you up to date on the latest career developments in the enterprise AI community – promotions, new hires and accolades. Here's the place to read about the movers and shakers, your colleagues, your friends, and maybe yourself.

Asanka Abeysinghe

WSO2, a provider of digital transformation technology solutions, promoted Asanka Abeysinghe to chief technology officer. Asanka, who has held several technology leadership roles at WSO2 since 2008, most recently served as the company’s chief technology evangelist.

“At WSO2, we’re on a mission to simplify the creation of digital experiences for our customers as we expand our new generation of SaaS offerings and open-source software to help software development teams innovate faster,” said Abeysinghe. “Just as we advise enterprises on delivering customer-centric digital applications, as CTO, my foremost priorities will include educating the market, channeling user feedback to product engineering, and delivering strategic advisory services to empower our customers with the essential foundation for success.”

Troy Anderson, Rahim Bhatia, Dan McAllister, Jessica Soisson, and Greg Wolfe

Boomi, a global software as a service (SaaS) company, appointed Troy Anderson as its global commercial market vice president. Anderson most recently led data analytics sales at Google Cloud for North America. He has also held senior leadership roles at Qlik, SAP, Crystal Decisions, and Business Objects.

In addition, Boomi appointed Rahim Bhatia as its chief strategy officer. Bhatia will help further develop and execute the company’s strategy for driving organizational growth and customer success. He brings more than 20 years of experience delivering scalable customer and revenue growth for technology businesses at companies such as SAP, CA Technologies, and Axway.

Dan McAllister joined Boomi as its senior vice president of global alliances and channels. McAllister will focus on further building the company’s global partner ecosystem and developing win-win partner enablement programs. McAllister previously led global teams and achieved industry-leading growth at Salesforce, MuleSoft, Box, NetSuite, Crystal Decisions, and SAP.

Boomi appointed Jessica Soisson as its chief accounting officer. Soisson will be responsible for overseeing all accounting matters and functions for the company. Before joining Boomi, she was previously the CAO and senior vice president and corporate controller at Citrix Systems.

Lastly, Boomi appointed Greg Wolfe its chief commercial officer. Wolfe will oversee the company’s global strategy, driving growth and exceptional customer experience across all business units and geographies. Prior to joining Boomi, he held C-level and executive roles at technology companies, including Adobe, SAP, Business Objects, Marketo, Crystal Decisions, and Xerox Corporation.

Jon Bakke

ScyllaDB, the database for data-intensive apps that require high throughput and predictable low latency, appointed Jon Bakke as its chief revenue officer. Bakke brings over 20 years of experience to ScyllaDB. Most recently, he served as the CRO of MariaDB. He’s also served in leadership roles at Oracle and MarkLogic.

“I’ve been watching ScyllaDB gain increasing traction as the go-to NoSQL database for demanding use cases,” Bakke said. “And with the insatiable demand for AI/ML-driven personalization, real-time virtual interactions, IoT device use, extensive fraud and threat detection, and so on … the market is clearly moving right into ScyllaDB’s sweet spot. It’s fundamentally architected to excel with what’s next.”

Nathaniel Crook

Amplitude, Inc., a digital analytics platform, appointed Nathaniel Crook as the company’s next chief revenue officer. With over 20 years of enterprise software sales and engineering experience, Crook will lead Amplitude’s sales and partner organizations globally.

"As organizations increasingly rely on technology to drive strategy and fuel growth, companies across all industries and segments need to build digital platforms and experiences that their customers love," said Crook. "As an essential part of the modern technology stack, our market is enormous and continues to rapidly grow. With a best-in-class product suite and a world-class team, I’m thrilled to join Amplitude and excited about our future."

Don Doerner, Turguy Goker, and David Turek

The DNA Data Storage Alliance, a SNIA Technology affiliate, announced the appointments of CATALOG Technologies and Quantum Corporation to the governing board of the Alliance. Don Doerner and Turguy Goker will represent Quantum, and David Turek will represent CATALOG.

Doerner joined Quantum in 2006 and is presently in the office of the CTO, responsible for technology strategy, vision, and leadership of Quantum products. Goker represents Quantum with LTO organizations and leads the Advanced Development Team focused on LTO technology. Turek joined CATALOG from IBM where he held numerous executive positions in high-performance computing and emerging technologies.

Richard Halkett and Danner Stodolsky

SambaNova Systems appointed Richard Halkett as its chief revenue officer. Halkett will be responsible for leading the sales, sales operations, revenue operations, customer engineering, and marketing teams. Halkett previously spent almost six years at Amazon Web Services as the managing director and WW Lead for Innovation & Transformation Programs.

In addition, SambaNova Systems appointed Danner Stodolsky as its senior vice president of cloud. Stodolsky will focus on SambaNova’s cloud strategy. He joined SambaNova from Google, where he spent over 11 years serving in various roles including as vice president of engineering for YouTube, Google Cloud Platform, and Ads Privacy.

Jack Huynh

AMD promoted Jack Huynh to the role of senior vice president and general manager of computing and graphics. Huynh has been at AMD for more than 24 years and most recently served as senior vice president (SVP) and general manager (GM) for the AMD Semi-Custom business group.

As SVP and GM, he was responsible for leading strategy, business management, and engineering execution for high performance custom solutions. Prior to that, Huynh served as corporate VP and GM, where he led the business execution of mobility solutions for the AMD Client PC business group.

Dinakar Hituvalli

Deltek, the provider of software and solutions for project-based businesses, appointed Dinakar Hituvalli as its chief technology officer. Hituvalli will be responsible for managing Deltek’s engineering and cloud operations teams. Dinakar previously spent 25 years at Oracle, most recently as group vice president of product development.

“I’m very excited to join Deltek to help drive the technology strategy forward and enhance our innovative product roadmap,” said Hituvalli. “As companies look for ways to stay competitive in today’s market, it is critical they have the right technology partners to help them evolve their business. I look forward to fostering a strong engineering culture and working with our talented team to help Deltek continue to innovate at a rapid pace, and in turn, deliver exceptional service to our customers.”

Helen Johnson

Appen Limited appointed Helen Johnson as its chief financial officer. Johnson brings with her a wealth of experience of more than 25 years leading finance organizations across a variety of industries, including privately held IT Managed Service providers, medium-sized public healthcare and financial services companies, and Fortune 500-ranked publicly traded corporations in the IT industry.

“I have a passion for building teams and delivering results,” Johnson said. “I am honored to join Armughan and the team to realize the strategic opportunity ahead for Appen.”

Andrew Joiner

Hyperscience, a provider of enterprise artificial intelligence solutions, appointed Andrew Joiner as its chief executive officer. Joiner joined Hyperscience from InMoment, where he held the role of CEO.

“I am thrilled to join Hyperscience and lead this exceptional team with its ground-breaking product and approach,” said Joiner. “Hyperscience is at the forefront of the intelligent automation revolution and our platform offers a unique combination of AI and human-in-the-loop capabilities that can streamline the most complex, high-volume document processes—and so much more. Today, the platform’s proven accuracy and efficiency are helping businesses reduce costs, improve customer satisfaction, and increase productivity. I am thrilled to partner with the team to build on this success and drive the company to even greater capability.”

Raghunath Koduvayur and Sylwia Barthel de Weydenthal

IQM Quantum Computers appointed Raghunath Koduvayur as the head of the newly created Asia-Pacific business unit. In addition, Sylwia Barthel de Weydenthal was named as the head of marketing and communications for IQM.

“Our presence in Asia aligns with our commitment to building world-leading quantum computers for the well-being of humankind, now and for the future, and we are confident this new office will be instrumental in helping drive the development of the quantum community in Singapore and the region,” Dr. Jan Goetz, CEO and Co-founder of IQM Quantum Computers, said. “In addition, we will tap into the incredible local talent, and we are also excited about bringing our technical track record and world-class expertise to the region, and our regional team will play a crucial role in broadening our global development. We look forward to partnering with important players in the value-chain as we continue to push the boundaries for the ecosystem.”

Ron Longo

Fortanix Inc., the multi-cloud data security company, appointed Ron Longo as its chief revenue officer. Longo will be responsible for overseeing the company’s global sales team. He most recently held the role of vice president of worldwide SDWAN/SASE sales at VMware.

“I have had the pleasure of working with some of the most innovative businesses in tech over the course of my career, but it could be argued that none have been as uniquely positioned in the market as Fortanix is right now,” Longo said. “Fortanix has proven to be a true pioneer in the data security landscape at a time when it has never been more vital, and I’m looking forward to building on its impressive growth trajectory. I couldn’t be more thrilled to get started.”

Marco Merkel

Do IT Now Germany, a joint venture between Do IT Systems (Italy) and HPCNow! (Spain), appointed Marco Merkel, a former sales executive at ThinkParQ, as the company’s chief executive officer.

“At Do IT Now, our mission is to empower businesses to fully harness their potential by delivering exceptional services and support,” said Merkel. “I am excited to join this innovative and progressive team, and I eagerly anticipate introducing our one-of-a-kind, dynamic approach to the industry in northern Europe.”

Tony Owens

SnapLogic appointed Tony Owens, former President at Salesforce and senior executive at Oracle, to its board of directors. Owens will work closely with company leadership to guide strategy and identify new growth and sales opportunities.

“It's a privilege to join SnapLogic's board of directors as the company enters its next stage of growth,” said Owens. “SnapLogic has already established itself as a leader in enterprise integration and automation with proven success across major verticals. I look forward to working with the team to continue to drive innovation, expand the global footprint, and reach even more customers to help propel their business success."

Deepak Patil

Intel promoted Deepak Patil to the role of corporate vice president and general manager of Intel’s Accelerated Computing Systems and Graphics group. For the past year, Patil has been serving as Intel’s chief technology and strategy officer in its Data Center and AI group. Prior to Intel, Patil was the senior vice president of APEX Engineering at Dell Technologies. He has also held leadership positions at Virtustream, Oracle, and Microsoft.


Mark Pundsack

Styra, Inc., the creators and maintainers of Open Policy Agent (OPA) and leader of cloud-native authorization, appointed Mark Pundsack as its chief executive officer. Pundsack brings more than 30 years of experience to the role with deep expertise in the software development industry, where he has spent much of his career leading product development teams and forging a path for developer experience.

“I’m honored to join Styra at such an exciting time,” said Pundsack. “The addition of Styra Load allows enterprises to scale their policy management through performance, efficiency, and time-to-market improvements – taking open source OPA to the next level.”

Rita Selvaggi

BackBox, a network automation company, appointed Rita Selvaggi to its board of directors. Selvaggi most recently served as chief executive officer of ActivTrak and as the chief marketing officer of AlienVault, a security software company acquired by AT&T. She has also led marketing for SolarWinds leading up to and through an IPO in 2009.

“BackBox may be the best-kept secret for automating network backups, single-click recovery, OS upgrades, and health checks,” said Selvaggi. “Customers love BackBox because it’s easy to implement and use, scalable, reliable, and trustworthy. Automation of critical network tasks is a must-have as hybrid networks add complexity and network operations teams are resource-constrained. I’m excited to work with the BackBox team to get the word out on their market-leading solution to help more companies upgrade and achieve their network automation goals.”

Haiyan Song

NetApp, a cloud-led, data-centric software company, appointed Haiyan Song as its executive vice president and general manager of its cloud operations business unit. Song most recently served as executive vice president and general manager of F5’s security and distributed cloud business unit.

“I am excited to join NetApp and lead the company’s efforts to be a strategic and operational partner to customers on their journey to the cloud and in the cloud,” said Song. “Cloud is a key enabler for the new digital world powered by data and AI, but organizations of all sizes are still grappling with the complexities of developing and operating in multi-cloud environments. I look forward to working with the talented team at NetApp to deliver innovative solutions to help customers better manage this complexity and unlock the full potential of the cloud.”

Magnus Tagtstrom

Iterate.ai, a low-code enterprise development platform developer, appointed Magnus Tagtstrom as its corporate vice president of emerging tech and general manager for Europe. Tagtstrom brings decades of experience to Iterate.ai leading digital innovation projects and optimizing critical business processes, with a particular focus on AI technologies.

“I’ve been fortunate to see—first-hand, at Alimentation Couche-Tard—how Iterate’s low-code code platform can turn a digital innovation dream into a reality, and do so at scale,” said Tagtstrom. “Low-code and AI are quickly becoming use-or-be-left-behind strategies for businesses across industries. By continuing to quickly and securely incorporate the most impactful new technologies into its Interplay platform, Iterate is playing a pivotal role in ensuring its customers can stay ahead of competitors and deliver truly unique digital experiences. I’m thrilled to be joining Iterate for its next stage of growth.”

Phil Taylor

End-to-end immersion cooling solutions provider Submer appointed Phil Taylor as its vice president of sales, data center. Taylor will lead Submer’s sales teams and be responsible for strategic account planning.

“The data center market is at a point in time where changes need to happen,” Taylor said. “The lack of power and the increased need for sustainable, energy-efficient products mean Submer is in pole position, and I am delighted to be working at such a forward-thinking company.”

Min Wang

Splunk Inc., the cybersecurity and observability leader, appointed Min Wang as its chief technology officer. Most recently, she spent more than five years at Google, where she led a team responsible for critical components of the company’s AI-driven Google Assistant. Prior to Google, Wang served as the SVP of Visa Research. She has also held research leadership roles for Google Research, HP Labs, and IBM Research.

“I am excited to join Splunk at a time when advancements in AI present a tremendous opportunity to transform our security and observability solutions,” said Wang. “By better leveraging AI technologies like machine learning and natural language processing, Splunk can extract deeper insights, provide more precise predictive analytics and streamline data analysis processes to foster more informed decision-making for our customers. I look forward to partnering with Splunk’s leaders and teams to help ensure our customers’ mission-critical systems remain secure, reliable and resilient.”


Do you know someone that should be included in next month's list? If so, send us an email at [email protected]. We look forward to hearing from you.


Managing Model Drift in Production with MLOps

Machine learning models are powerful tools that can help businesses make more informed decisions and optimize their operations. However, as these models are deployed and run in production, they are subject to a phenomenon known as model drift.

Model drift occurs when the performance of a machine learning model degrades over time due to changes in the underlying data, leading to inaccurate predictions and potentially significant consequences for a business. To address this challenge, organizations are turning to MLOps, a set of practices and tools that help manage the lifecycle of production machine learning.

In this article, we'll explore model drift, its different types, how to detect it, and most importantly, how to handle it in production using MLOps. By understanding and managing model drift, businesses can ensure that their machine learning models remain accurate and effective over time, delivering the insights and outcomes that they need to thrive.

Photo by Nicolas Peyrol on Unsplash
What is Model Drift?

Model drift, also known as model decay, is a phenomenon in machine learning in which a model's performance degrades over time. The model gradually starts to give worse predictions, and its accuracy decreases as time goes on.

Model drift has different causes, such as changes in how data is collected or in the underlying relationships between variables. A model trained on historical data fails to capture these changes, so its performance degrades as the changes accumulate.

Detecting and addressing model drift is one of the essential tasks that MLOps solves. Techniques such as model monitoring are used to detect the presence of drift, and model retraining is the main technique used to overcome it.
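The monitor-then-retrain loop can be sketched in a few lines: track accuracy over a sliding window of labeled outcomes and raise a retraining flag when it falls below a threshold. The window size and threshold below are arbitrary illustrations, not recommended values.

```python
# Minimal sketch of production model monitoring: flag the model for
# retraining when rolling accuracy drops below a threshold.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.8):
        self.results = deque(maxlen=window)  # recent correct/incorrect flags
        self.threshold = threshold

    def record(self, prediction, actual) -> bool:
        """Log one labeled outcome; return True if retraining is advised."""
        self.results.append(prediction == actual)
        if len(self.results) < self.results.maxlen:
            return False  # not enough evidence yet
        accuracy = sum(self.results) / len(self.results)
        return accuracy < self.threshold

monitor = DriftMonitor(window=10, threshold=0.8)
alerts = [monitor.record(p, a) for p, a in zip([1] * 10, [1] * 7 + [0] * 3)]
print(alerts[-1])  # True: rolling accuracy fell to 0.7 over the window
```

In practice the alert would feed a retraining pipeline rather than a print statement, and ground-truth labels often arrive with a delay that the window must account for.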

Types of Drift

Understanding the type of model drift is essential to update the model based on the changes that occurred in the data. There are three main types of drift:

Concept Drift

Concept drift occurs when the relationship between the input and the target changes, so the machine learning model no longer provides accurate predictions. There are four main types of concept drift:

  • Sudden Drift: The relationship between the independent and dependent variables changes abruptly. A famous example is the COVID-19 pandemic: its onset suddenly changed the relationship between features and target variables across many fields, so a predictive model trained on pre-pandemic data could not predict accurately during the pandemic.
  • Gradual Drift: The relationship between the input and the target changes slowly and subtly, causing a gradual decline in model performance as the model becomes less accurate over time. A classic example is fraudulent behavior: fraudsters learn how a fraud detection system works and change their behavior over time to evade it, so a model trained on historical fraudulent transaction data will slowly lose accuracy.
  • Incremental Drift: The relationship between the target variable and the input changes in small steps over time, usually due to changes in the data-generating process. For example, consider a model that predicts stock prices, trained on data from the past five years and evaluated on the current year. As market dynamics evolve, the relationships between the variables that influence stock prices shift gradually, and the model's accuracy deteriorates incrementally as it becomes less effective at capturing them.
  • Recurring Drift: This is also known as seasonality. A typical example is the increase in sales during Christmas or Black Friday. A machine learning model that does not take these seasonal changes into account will provide inaccurate predictions around them.

These four types of concept drift are shown in the figure below.

Types of concept drift | Image from Learning under Concept Drift: A Review.

Data Drift

Data drift occurs when the statistical properties of the input data change. An example is a change over time in the age distribution of the users of an application: a marketing model trained on one age distribution will have to be updated, since the shift in ages affects the marketing strategies it informs.

Upstream Data Changes

The third type of drift is upstream data changes, which refers to operational changes in the data pipeline. A typical example is when a specific feature is no longer generated, resulting in missing values. Another is a change in the unit of measurement, for example when a sensor that reported temperatures in Celsius starts reporting them in Fahrenheit.
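
A lightweight validation step in the pipeline can catch a silent unit switch like the Celsius-to-Fahrenheit example above. The sketch below is purely illustrative: the function name and the range bounds are hypothetical assumptions, not a standard check.

```python
# Hypothetical sketch of an upstream-data check: flag a silent
# Celsius -> Fahrenheit unit switch by validating the expected range.
def check_temperature_unit(values, expected_min=-40.0, expected_max=60.0):
    # Returns True if every reading falls in a plausible Celsius range
    # (the bounds here are illustrative assumptions).
    return all(expected_min <= v <= expected_max for v in values)

celsius_batch = [21.5, 19.0, 23.2]
fahrenheit_batch = [70.7, 66.2, 73.8]  # the same readings after a unit switch

print(check_temperature_unit(celsius_batch))     # plausible Celsius readings
print(check_temperature_unit(fahrenheit_batch))  # out of range: flags the switch
```

In practice such checks would run on every batch entering the pipeline, alongside schema checks for features that stop arriving.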

Detecting Model Drift

Detecting model drift is not straightforward, and there is no universal method for it. However, we will discuss some of the popular methods:

  • The Kolmogorov-Smirnov test (K-S test): The K-S test is a nonparametric test that detects changes in a data distribution. It compares the training data with post-training data and measures the difference between their distributions. The null hypothesis states that the two samples come from the same distribution, so rejecting it indicates a distribution shift.
  • The Population Stability Index (PSI): PSI is a statistical measure that is used to measure the similarity in the distribution of categorical variables in two different datasets. Therefore it can be used to measure the changes in the characteristics of categorical variables in the training and post-training dataset.
  • Page-Hinkley Method: The Page-Hinkley method is a statistical method used to observe changes in the mean of the data over time. It is typically used to detect small changes in the mean that are not apparent from inspecting the data directly.
  • Performance Monitoring: One of the most important ways to detect concept shift is to monitor the performance of the machine learning model in production; if a metric crosses a certain threshold, we can trigger an action to correct the shift.
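
As a minimal sketch of the first of these methods, SciPy's two-sample K-S test can compare a feature's training-time distribution with its production distribution. The data here is synthetic, and the 0.05 significance level is a common but arbitrary choice:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=1000)  # training-time sample
live_feature = rng.normal(loc=0.5, scale=1.0, size=1000)   # shifted production sample

# Null hypothesis: both samples come from the same distribution
stat, p_value = ks_2samp(train_feature, live_feature)
drift_detected = p_value < 0.05  # rejecting the null indicates a distribution shift
print(f"KS statistic={stat:.3f}, p-value={p_value:.4f}, drift={drift_detected}")
```

The same call on a feature that has not shifted would yield a large p-value and no drift flag.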

Handling Drift in Production
Handling Drift in Production | Image by ijeab on Freepik.

Finally, let's see how to handle the detected model drift in production. There is a wide spectrum of strategies for handling model drift, depending on the type of drift, the data we are working with, and the project in production. Here is a summary of the popular methods used to handle model drift in production:

  • Online Learning: Since many real-world applications run on streaming data, online learning is one of the common methods used to handle drift. In online learning, the model is updated on the fly as it processes one sample at a time.
  • Periodic Model Re-training: Once the model's performance falls below a certain threshold, or a data shift is observed, a trigger can be set to retrain the model on recent data.
  • Periodic Re-training on a Representative Subsample: A more effective way to handle concept drift is to select a representative subsample of the population, have human experts label it, and retrain the model on it.
  • Feature Dropping: This is a simple but effective method for handling concept drift. We train multiple models, each using a single feature, and monitor the AUC-ROC of each; if the AUC-ROC for a particular feature's model falls below a certain threshold, we can drop that feature, as it may be contributing to the drift.

References

  • Best Practices for Dealing With Concept Drift
  • Understanding Data Drift and Model Drift: Drift Detection in Python
  • Machine Learning Concept Drift - What is it and Five Steps to Deal With it

In this article, we discussed model drift, which is the phenomenon in machine learning where the performance of a model deteriorates over time due to changes in underlying data. Businesses are turning to MLOps, a set of practices and tools that manage the lifecycle of machine learning models in production, to overcome these challenges.

We outlined the different types of drift that can occur, including concept drift, data drift, and upstream data changes, and how to detect model drift using methods such as the Kolmogorov-Smirnov test, Population Stability Index, and Page-Hinkley method. Finally, we discussed the popular techniques to handle model drift in production including online learning, periodic model re-train, periodically re-train on a representative subsample, and feature dropping.
Youssef Rafaat is a computer vision researcher & data scientist. His research focuses on developing real-time computer vision algorithms for healthcare applications. He also worked as a data scientist for more than three years in the marketing, finance, and healthcare domains.


To Control AI or Be Controlled

The battle lines in AI research have been drawn clearly. There are factions that believe AI will end humanity as we know it, led by notable researcher and AI doomsayer Eliezer Yudkowsky. There are others, newly reformed and critical of the direction in which AI progress is headed, like Geoffrey Hinton, Godfather of Deep Learning, who resigned from Google Brain a couple of days back. But what we do know, without a doubt, is that AI even in its current half-baked state is capable of controlling us.

Can LLMs be controlled?

Which brings us to the question of whether something that's not quite as smart as us can in fact control us. According to Geoffrey Hinton, this happens more often than we fully realise. Political leaders, managers we report to, gurus we pray to, and not to mention our cats who have us running circles around them: none of them are necessarily smarter than us.

Meta AI’s Chief Scientist Yann LeCun sees no problem with this. Just yesterday, LeCun tweeted, “We can design AI systems to be both super-intelligent *and* submissive to humans. I always wonder why people just assume that intelligent entities will necessarily want to dominate.

That’s just plain false, even within the human species.”

LeCun thinks that for machines to be in control, they should “want to take control” and our instant assumption that they will obviously dominate humans is purely drawn from science fiction dreams.

LeCun isn’t going against the grain here. AI researcher and inventor of the Markov Logic Network in ML, Pedro Domingos also tweeted saying, “You’re already being manipulated every day by people who aren’t even as smart as you, but somehow you’re still OK. So why the big worry about AI in particular?”

Domingos and LeCun both rest easy on the logic that LLMs for a fact do not have “agency” like humans. More than anything, it looks like LeCun is trying to put a stop to AI fear mongering, repeating that superhuman AI systems were still somewhat at a distance from us. “Gods and superhuman AI systems have a few things in common: They are invented by people. People fear they may run the world. People fight about what it all means. They don’t actually exist,” he tweeted.

But none of this can refute the fact that modern AI models are normally built in a way that intent may simply elude them. Deep neural networks, which most ML is based on, absorb and process huge amounts of data, but they are black boxes whose internal workings are pretty much invisible even to their makers.

How to control an AI system

Nick Bostrom’s ‘Superintelligence’ has also discussed the mechanisms to solve AI’s control problem at length. Bostrom stated that containing AI to control it might also mean that we have to eventually forgo its benefits. He then went on to show instances of how even well-intentioned methods of using AI could very easily backfire.

Say a superintelligence was given the task of 'maximising happiness in the world': it might decide the most efficient way to do this is simply to destroy all life on earth and generate faster computerised simulations of happy thoughts. Bostrom theorised that even with very little communication, there was no full guarantee that a superintelligence could be kept completely safe.

A JAIR or Journal of Artificial Intelligence Research study titled, ‘Superintelligence cannot be contained: Lessons from Computability Theory’ by Google engineer Lorenzo Coviello and University of Melbourne professor Andres Abeliuk among others, stated explicitly that “containment (of AI) in principle, is impossible, due to fundamental limits inherent to computing itself.”

And if LLMs are too limited to warrant these fears, it could be argued that AI is already improving itself. Last year, a paper titled 'Self-Programming Artificial Intelligence using Code-generating Language Models' showed how researchers could program a model capable of autonomously editing its own source code to improve itself. ChatGPT, too, can not only fix bugs in its code but also explain why it is doing so.

Maybe we should all turn to look at Hinton himself, the man who was practically responsible for the biggest leap in deep learning, who recently tweeted saying, “If we did make something MUCH smarter than us, what is your plan for making sure it doesn’t manipulate us into giving it control?”

Hinton is right: there is no plan in place. And none have been more open about how clueless they are than OpenAI chief Sam Altman. The maker of the GPT models has recently stated that the consequences of AI are a toss-up and could be either "terrifying or awesome." What if things went south? "It's lights-out for all of us," he responded.

The post To Control AI or Be Controlled appeared first on Analytics India Magazine.

Exploratory Data Analysis Techniques for Unstructured Data


Exploratory Data Analysis (EDA) is one of the crucial phases of the machine learning development life cycle when working on any real-life data analysis project; it can take almost 50-60% of the whole project's time, because the raw data we use to find insights has to be processed before applying machine learning algorithms to get the best performance. This step includes the following:

  1. It involves better analyzing and summarizing data sets to understand their underlying patterns, relationships, and trends.
  2. It allows analysts to identify essential data features, detect anomalies or outliers, and determine the most appropriate modeling techniques for predicting future outcomes.

Let's understand the significance of EDA in Data Analytics with a story.

Understand the Importance of EDA with a Story

Once upon a time, a small firm had just started its business in the market. The firm had a group of professionals who were passionate about their roles and worked so that the firm as a whole would profit. As the firm grew in employees and in users of the product it was promoting, the management team realized they needed help understanding the needs and behavior of users and customers towards the products or services the firm was offering.

To overcome this issue, they started hiring tech professionals. Eventually, they hired a Data Analyst so that they could better understand their customer data, someone who would be able to find important information and insights in it. The analyst they hired had good hands-on experience with the same type of technology and projects, having worked mainly on exploratory data analysis.

So, for this problem, they started collecting data from multiple APIs through web scraping in an ethical manner, which includes the company website, social media handles, forums, etc. After data collection, they started with cleaning and processing the data so that they would be able to find some insights from that data. They used statistical techniques such as hypothesis testing and business intelligence tools to explore the data and uncover the hidden patterns using pattern recognition techniques.

After creating the pipeline, they observed that the company's customers were most interested in buying eco-friendly and sustainable products. Based on these insights, the management launched eco-friendly and sustainable products. Customers liked the new products, and eventually the company's revenue started multiplying. Management realized the importance of exploratory data analysis and hired more data analysts.

Therefore, in this article, inspired by the story above, we will go through the different techniques of the exploratory data analysis phase of the pipeline and use popular tools in the process, through which you can find million-dollar insights for your company. This article provides a comprehensive overview of EDA and its importance in data science for beginners and experienced data analysts.

Different Techniques to Implement

To understand each technique used inside EDA, we will go through one dataset and implement it using Python libraries for Data Science, such as NumPy, Pandas, Matplotlib, etc.

The dataset we will use in our analysis is Titanic Dataset, which can be downloaded from here. We will use train.csv for model training.

1. Import Necessary Libraries and Dependencies

Before implementing, let’s first import the required libraries that we are going to utilize to implement different EDA techniques, including

  1. NumPy for matrix manipulation,
  2. Pandas for data analysis, and
  3. Matplotlib and Seaborn for Data Visualization.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sb

2. Load and Analyze the Dataset

After importing all the required libraries, we will load the Titanic dataset using the Pandas dataframe. Then we can start performing different Data preprocessing techniques to prepare the data for further modeling and generalization.

passenger_data = pd.read_csv('titanic.csv')
passenger_data.head(5)

Output:

Fig. 1 | Image by Author

3. Get Statistical Summary

The following analysis provides us with the statistics of all the numerical columns in the data. The statistics which we can obtain from this function are:

  1. Count
  2. Mean
  3. Standard deviation
  4. Minimum and maximum values
  5. Quartile values (the 50% quartile is the median)
passenger_data.describe()

Output:

Fig. 2 | Image by Author

By interpreting the above output, we can see that there are 891 passengers with an average survival rate of 38%. The Age column ranges from 0.42 to 80, with an average age of approximately 30 years. At least 50% of the passengers have no siblings/spouses aboard, at least 75% have no parents/children aboard, and the Fare column varies widely in value.

Let's try to compute the survival rate by writing the code from scratch.

4. Compute the Overall Survival Rate of Passengers

To compute the overall survival rate, we first select the 'Survived' column, check the rows for which the value is one, and then count all those rows. Finally, to find the percentage, we will divide it by the total number of rows and print it.

survived_data = passenger_data[passenger_data['Survived'] == 1]
survived = survived_data.count().values[1]
survival_percent = (survived / 891) * 100
print('The percentage of survived people in training data are {}'.format(survival_percent))

Output:

The percentage of survived people in training data are 38.38383838383838

5. Compute the Survival Rate by Gender and the ‘Pclass’ Column

Now we want to find the survival rate aggregated with respect to different columns. We will group by the 'Sex' and 'Pclass' columns, apply the mean function, and print the result.

survival_rate = (passenger_data[['Pclass', 'Sex', 'Survived']]
                 .groupby(['Pclass', 'Sex'], as_index=False)
                 .mean()
                 .sort_values('Survived', ascending=False))
print(survival_rate)

Output:

   Pclass     Sex  Survived
0       1  female  0.968085
2       2  female  0.921053
4       3  female  0.500000
1       1    male  0.368852
3       2    male  0.157407
5       3    male  0.135447

6. Change the Data Type of Passenger Id, Survived, and Pclass to String

Since some of the columns are of different data types, we convert those columns to a single fixed data type, i.e., string.

Cols = ['PassengerId', 'Survived', 'Pclass']
for index in Cols:
    passenger_data[index] = passenger_data[index].astype(str)
passenger_data.dtypes

7. Duplicated Rows in the Dataset

Duplicated rows can degrade performance during data modeling, so it's always recommended to remove them. The line below selects any duplicated rows in the dataset:

passenger_data.loc[passenger_data.duplicated(), :]

8. Creating the Histograms to Check Data Distribution

We plot the distribution of the data across the possible values of the 'Survived' column so that we can check for class bias; if there is an imbalance, we can apply techniques such as oversampling, undersampling, or SMOTE to overcome it.

sb.set_style("white")
g = sb.FacetGrid(data=passenger_data[passenger_data['Age'].notna()], col='Survived')
g.map(plt.hist, "Age");

Output:

Fig. 3 | Image by Author

Now, when comparing the two distributions above, it is recommended to use relative frequency instead of absolute frequency (for example, via a cumulative density function), since the two groups differ in size. Taking the Age column as an example, the histogram with absolute frequency suggests that there were many more victims than survivors in the 20-30 age group.
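
To make the comparison concrete, here is a small sketch of relative-frequency (density-normalized) histograms. The age arrays below are synthetic stand-ins for the survivor and victim groups, not the actual Titanic values:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for the Age column of victims and survivors
ages_victims = rng.normal(32, 12, size=500).clip(0.4, 80)
ages_survivors = rng.normal(28, 14, size=300).clip(0.4, 80)

bins = np.arange(0, 90, 10)  # decade-wide age bins
# density=True rescales counts so each histogram integrates to 1,
# making groups of different sizes directly comparable
freq_victims, _ = np.histogram(ages_victims, bins=bins, density=True)
freq_survivors, _ = np.histogram(ages_survivors, bins=bins, density=True)
print(np.round(freq_victims, 4))
print(np.round(freq_survivors, 4))
```

The same density=True argument works in plt.hist, so the FacetGrid histograms above can be normalized the same way.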

9. Plot Percentage of Missing Values in Age by Survival

Here we have created the pie chart to find the percentage of missing values by Survival values and then see the partition.

# Helper (assumed): pie chart of missing vs. present Age values
def age_na_pie(ages):
    counts = [ages.isna().sum(), ages.notna().sum()]
    plt.pie(counts, labels=['Age missing', 'Age present'], autopct='%1.1f%%')

dt0 = passenger_data['Age'][passenger_data['Survived'] == '0']
dt1 = passenger_data['Age'][passenger_data['Survived'] == '1']
plt.figure(figsize=[15, 5])

plt.subplot(1, 2, 1)
age_na_pie(dt0)
plt.title('Survived: No');

plt.subplot(1, 2, 2)
age_na_pie(dt1)
plt.title('Survived: Yes');

Output:

Fig. 4 | Image by Author

The pie plots show that passengers with missing ages were more likely to be victims.

10. Finding the Number of Missing Values in each Column

passenger_data.isnull().sum()

From the output, we observe that the "Cabin" column has the most missing values, so we will drop that column from our analysis.

11. Percentage of Null Values per column

passenger_data.isna().sum()/passenger_data.shape[0]

Approximately 20% of the data in the Age column is missing, approximately 77% of the Cabin column is missing, and 0.2% of the Embarked column is missing. Our aim is to handle the missing data before modeling.

12. Drop the Cabin Column from the Dataset

Drop the cabin column, as it has many missing values.

passenger_data = passenger_data.drop(labels=['Cabin'], axis=1)
print(passenger_data)

To handle the "Age" column, we will first check its data type and then fill all the missing values with the (integer) median of the column.

print(passenger_data['Age'].dtype)
passenger_data['Age'].fillna(int(passenger_data['Age'].median()), inplace=True)
print(passenger_data['Age'].isna().sum())

After this, our dataset looks good regarding missing values, outliers, etc. Now, if we apply machine learning algorithms to find patterns in the dataset and then test on the testing data, the model's performance will be better than on data without preprocessing, exploratory data analysis, or data wrangling.
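
As a minimal sketch of that final modeling step: the tiny DataFrame below is a made-up stand-in for the cleaned Titanic data (only two numeric columns, invented values), and logistic regression is just one reasonable baseline, not the only choice.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Made-up, already-clean rows standing in for the preprocessed dataset
data = pd.DataFrame({
    'Age':      [22, 38, 26, 35, 28, 2, 54, 14, 40, 58, 30, 19],
    'Fare':     [7.3, 71.3, 7.9, 53.1, 8.1, 21.1, 51.9, 30.1, 9.5, 26.6, 8.7, 7.8],
    'Survived': [0, 1, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1],
})

# Hold out a test set, fit a baseline model, and score it
X_train, X_test, y_train, y_test = train_test_split(
    data[['Age', 'Fare']], data['Survived'], test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
print(f"test accuracy: {accuracy:.2f}")
```

On the real, cleaned dataset you would use all the engineered features and compare this baseline against the same model fit on the raw, unprocessed data.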

Summary Insights from EDA

Here are survivors' characteristics compared to victims.

  1. Survivors were likely to have parents or children with them; compared to others, they had more expensive tickets.
  2. Children were more likely to survive than passengers in other age groups.
  3. Passengers with missing ages were less likely to be survivors.
  4. Passengers in higher classes (lower Pclass values, a proxy for SES) were more likely to survive.
  5. Women were much more likely to survive than men.
  6. Passengers who embarked at Cherbourg had a higher chance of survival than those from Queenstown and Southampton.

You can find a colab notebook here for the complete code — Colab Notebook.

Conclusion

This ends our discussion. Of course, there are many more EDA techniques than I covered here, depending on the dataset used in your problem statement. To sum up, EDA, that is, knowing your data before you use it to train your model, is beneficial. This technique plays a crucial role in any data science project, allowing even simple models to perform better. Therefore, every aspiring Data Scientist, Data Analyst, Machine Learning Engineer, and Analytics Manager needs to know these techniques properly.

Until then, keep reading and keep learning. Feel free to contact me on Linkedin in case of any questions or suggestions.
Aryan Garg is a B.Tech. Electrical Engineering student, currently in the final year of his undergrad. His interests lie in the fields of web development and machine learning. He has pursued these interests and is eager to work more in these directions.

How to use ChatGPT: What you need to know now

ChatGPT on mobile

ChatGPT, OpenAI's most popular endeavor thus far, has kickstarted an artificial intelligence (AI) revolution since its launch in late 2022. The AI chatbot has been dominating headlines and has preoccupied the minds of those running Twitter, Google, Amazon, Microsoft, Meta, other tech experts, and, more recently, music labels. The AI language model became the fastest-growing 'app' of all time, even surpassing TikTok.

Also: The new AI-powered Bing is now open to everyone — with some serious upgrades

The ChatGPT model is certainly not underrated; users are coming up with creative ideas for prompts, such as asking questions in search of funny answers, creating content, improving their writing or Excel skills, finding and correcting a bug in code, or summarizing a book. Some even wonder if the AI chatbot could replace programmers, writers, and even doctors, and how it could revolutionize different industries.

Also: I used ChatGPT to write the same routine in these ten obscure programming languages

Across all these areas, one thing is clear: the genius of this AI tool isn't in how innovative the idea of it is, but in how well it performs text generation and how accessible and easy to use it is. ChatGPT can hold conversational text interactions with users by employing AI, and these exchanges can feel as natural as if you're having a conversation with another person.

How to use ChatGPT

If you haven't created an account, click on Sign Up. Otherwise, log in with your OpenAI credentials.

Refer to the numbered list above to learn how to use the ChatGPT window.

Start writing in the text box at the bottom of the page. Then, press Enter to submit your prompt.

ChatGPT prompt examples

ChatGPT can generate responses to prompts (a capability that could eventually challenge search engines), which is enough to make it an important tool for content generation, from writing essays to summarizing a book for you. It can also write and fix code, make calculations, help you compile your resume, translate information, and more. Here are examples of prompts you could start with:

  • How does a computer store and process information?
  • Analyze this code and tell me how to fix it: [Paste the code]
  • Write a poem about a migraine in the style of Walt Whitman.
  • Write a country song about a dog named Speckles who loves to run.
  • Write a plugin for —— that does ——
  • What is the difference between a virus and a bacterium?
  • Write a sick note for my child who is missing school.

FAQs

What is ChatGPT?

ChatGPT is a large language model that uses artificial intelligence to hold text conversations with users that can feel natural, as if you were asking someone questions.

Also: Can AI detectors save us from ChatGPT? I tried 3 online tools to find out

The human-like responses are useful when translating from one language to another, looking for instructions on how to do something, and generating written content.

How does ChatGPT work?

ChatGPT uses reinforcement learning with human feedback (RLHF) to intelligently process its environment, using human demonstrations to adapt to different situations with learned, desired behaviors.

Also: How to save a ChatGPT conversation to revisit later

ChatGPT was trained on a substantial amount of data prior to its research preview, and continues learning through the human knowledge users provide, making it able to give educated responses to a vast variety of topics.

How do I register for ChatGPT?

In order to register for ChatGPT, all you need to do is sign up for a free OpenAI account using your email address.

How can I access ChatGPT?

You can access ChatGPT by going to chat.openai.com and logging in. If you're on OpenAI's website, you can log in to your account, then scroll down until you see ChatGPT on the bottom left corner of the page, and click on it to start chatting.

Is ChatGPT free?

Yes, you can use ChatGPT for free — for now. Since the natural language processing model is still in its research and "learning" preview phase, people can use it for free; all you need is to register for a free OpenAI account, though there is an option to upgrade to a paid membership.

The key differences between a free account and ChatGPT Plus.

OpenAI launched ChatGPT Plus for customers who want to have unlimited access without black-out windows during peak times, faster responses, and priority access to new features, for $20/month.

Also: The best AI art generators: DALL-E 2 and alternatives

It's also based on GPT-4, a more advanced language model than the one in the free version of ChatGPT.

What can I use ChatGPT for?

Your imagination is the limit. Have fun with different ChatGPT prompts. ZDNET's David Gewirtz asked the AI chatbot to write a WordPress plugin for him and used it to help him fix code faster, for example. He also asked it to write a Star Trek script.

Also: How to use ChatGPT to write code

Others are using it to write malware. One professor is promoting the use of ChatGPT in his classroom and countless other teachers are using it even more than their students. Here are a few other ideas you could try:

  • Write a song about [insert topic here] — Try adding multiple details.
  • Write a poem about [insert topic here] — Again, add as many details as you can think of.
  • Ask it philosophical questions.
  • Ask it to summarize ideas or concepts.

The more details you write in your prompts, the more precise the answers will be.

Also: How does ChatGPT work?

ChatGPT could one day replace and, in the case of Bing, enhance search engines. Though the text bar in ChatGPT isn't a search bar, Microsoft introduced an AI-powered Bing search engine that is connected to the internet, making it able to provide answers to questions that ChatGPT can't handle.

Can I use ChatGPT on mobile?

Although there is no ChatGPT mobile app, you can use the AI-based tool from your mobile browser on your smartphone.

Also: How to use ChatGPT to write Excel formulas

The steps to use OpenAI's ChatGPT from your phone are the same as above: go to chat.openai.com, log in, accept the terms, and start typing. The AI assistant will work just as it would when you access it from your computer.

What is ChatGPT Legacy and ChatGPT Default?

ChatGPT runs on the large language model called GPT-3.5, but there are two versions of this model: Legacy and Default. The Legacy model is the one you access when you log in to a free account using your OpenAI account credentials and use ChatGPT.

The Default model does a better job of explaining and understanding nuances most of the time, and is more loyal to the developers' efforts to prevent generation of inappropriate content than the Legacy model. This model was also the one available to paid ChatGPT Plus users before the release of GPT-4.

Also: ChatGPT is the most sought out tech skill in the workforce, says learning platform

In day-to-day use, both Legacy and Default are similar, but Default is superior. GPT-4 is faster, more accurate, and larger than both models of GPT-3.5.

According to the May 3 ChatGPT release, all new messages will use the Default model, as OpenAI prepares to deprecate the Legacy model on May 10th.

Is ChatGPT the best AI?

If you're trying to figure out which is the best AI chatbot, you may wonder how OpenAI's ChatGPT compares to others, like Google Bard and Microsoft's AI-powered Bing. The rise in ChatGPT's popularity can largely be attributed to the expert combination of wide accessibility, knowledge, and fluidity in conversations.

Also: ChatGPT vs. Bing Chat: Which AI chatbot should you use?

Bard and Bing Chat are available on a more limited preview. Compared to ChatGPT, Bing Chat is more based on its search-engine nature, as it combines GPT-4 and gathers information from the internet, even quoting the sources for the web pages where it got its response.

Can ChatGPT refuse to answer my prompts?

AI systems like ChatGPT aren't all-powerful; they can and do reject inappropriate requests. Aside from having limited knowledge, the AI assistant is able to distinguish inappropriate requests to prevent the generation of unsafe content.

Also: 6 things ChatGPT can't do (and another 20 it refuses to do)

This includes questions that violate someone's rights, are offensive, are discriminatory, or involve illegal activities. The ChatGPT model can also challenge incorrect premises, answer follow-up questions, and even admit mistakes when you point them out.

Does ChatGPT give everyone the same answer?

ChatGPT can generate essays, write code, and more from user queries.

Most of the time, when different people ask ChatGPT the same question, they will get the same answer. There might be a few variations in words, but they will be almost identical.

Also: I tried Bing's AI chatbot, and it solved my biggest problems with ChatGPT

If someone wanted to determine whether ChatGPT wrote an article, or a professor wanted to check whether a student used the language model for an essay, asking ChatGPT the same question the piece was based on could help figure it out. ChatGPT also tends to generate more uniformly polite prose than human writers.

Does ChatGPT give wrong answers?

ChatGPT, like all language models, is not without limitations and can give nonsensical answers and incorrect information, so it's important to double-check what it tells you. Because it was trained on a large but fixed body of text, and OpenAI refines it using user conversations and feedback, the model can still confidently repeat misinformation absorbed from that data.

OpenAI recommends that users rate ChatGPT's responses with the thumbs-up and thumbs-down buttons to help improve the model. Even better, you can join the company's Bug Bounty program and earn up to $20,000 for reporting security flaws and safety issues.

Also: OpenAI will pay you to hunt for ChatGPT bugs

The AI chatbot is not connected to the internet and is unable to determine the current date, so asking ChatGPT how many days until Easter won't get you an exact number of days, for example — this is one of the ways ChatGPT differs from search engines.

Will my conversations with ChatGPT be used for training?

When you're familiarizing yourself with how to use ChatGPT, you may wonder if your specific conversations will be used for training and, if so, then who can view your conversations. Your conversations can be viewed by OpenAI and used as training data to refine its systems, so I wouldn't enter any personal or private information.

Also: Teachers are using ChatGPT more than students. Here's how

The prompts you enter when you use ChatGPT are also permanently saved to your account and you won't be able to delete specific prompts unless you delete your whole account. If you'd like to delete your account, follow these steps:

  1. Log into your OpenAI account.
  2. Go to Help.
  3. On the bottom of the pop-up, select Messages.
  4. Click on Send us a message.
  5. Choose Account Deletion from the available options.
  6. Follow the prompts to delete your account and data.

Why is ChatGPT saying my access is denied?

During peak times, you may be unable to access ChatGPT. If you're seeing a message that it's at capacity, you can refresh the page or sign up to receive an email when it's available again.

Also: 5 ways to use chatbots to make your life easier

Aside from reaching capacity, access to ChatGPT can be denied for various reasons — mine gets denied while using a VPN, for example. If you're getting a message when trying to use ChatGPT that says your access is denied, it may be one of these issues:

  • Violation of the OpenAI API's terms of service.
  • Trying to access an unavailable version of GPT.
  • An invalid API key.
  • Exceeded usage limits.
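For developers, the API-related causes in the list above surface as distinct HTTP status codes, so they can be told apart programmatically. The sketch below maps common status codes to likely causes; it is a rough simplification for illustration, not the official client library's behaviour, and `explain_openai_error` is a name made up for this example:

```python
def explain_openai_error(status_code):
    """Map common OpenAI API HTTP status codes to the likely cause.

    The mapping reflects publicly documented API behaviour at the time of
    writing; treat it as a rough guide, not an exhaustive list.
    """
    if status_code == 401:
        return "Invalid or missing API key — check the Authorization header."
    if status_code == 403:
        return "Access denied — the account may lack access to this model or region."
    if status_code == 404:
        return "Not found — you may be requesting an unavailable GPT version."
    if status_code == 429:
        return "Usage or rate limit exceeded — slow down requests or raise your quota."
    return "Unexpected error — consult the API reference and terms of service."
```

A caller would check the response's status code before parsing the body, e.g. `explain_openai_error(response.status_code)`.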

Geoffrey Hinton Is The Bad Dad of AI

The touted granddad of deep learning, Geoffrey Hinton, recently quit Google so he could talk more freely about the threats posed by AI. He will also be responding to requests for help from Bernie Sanders, Elon Musk and the White House, he said. Yet a few years ago, when Google's own AI ethics team raised the alarm about the company's practices, the AI Prometheus turned a blind eye.

Hinton announced his exit in a detailed New York Times interview. He expressed concern over AI's potential dangers, likening the moment to Robert Oppenheimer's work on the Manhattan Project, which produced the world's first nuclear bombs during World War II. The 75-year-old polymath believes that the pursuit of profit in AI development could lead to a world where AI-generated content outnumbers that produced by humans, endangering our very survival.

The Oppenheimer Fallacy

In the past, when asked about the potential harm of AI, Hinton paraphrased Oppenheimer, saying that when one encounters something technically sweet, one goes ahead and pursues it.

However, Hinton now expresses regret over the consequences of his work. He acknowledges that the once far-fetched idea of machines surpassing human intelligence is now a realistic possibility. Hinton, who previously believed such advancements were still "30-50 years away", cites recent progress in large language models, particularly OpenAI's GPT-4, as evidence of how quickly machines are advancing.

He said, “Look at how it was five years ago and how it is now. Take the difference and propagate forwards. That’s scary.”

Interestingly, the nuclear weapons analogy also resurfaced in the Stanford Artificial Intelligence Index Report 2023 which was released last month. The breadth of AI’s applications is unlike any other field. The report notes that 36% of NLP (natural language processing) researchers polled think that artificial general intelligence (AGI) could lead to a catastrophic event on par with a nuclear disaster.

While the analogy provides a helpful point of reference, it has its limitations. AI's reach spans everything from social media to nuclear weapons, so analogies like the Oppenheimer comparison can be illuminating yet incomplete when describing its scope.

Hinton on the fence

Addressing the NYT article by Cade Metz that suggested Hinton left Google in order to criticize the company, he clarified that he left Google to speak out about the dangers of AI without being constrained by any potential impact on the company. He further noted that Google has acted responsibly in its pursuit of AI.

But we do not agree with him.

Ethically, Google has been in a state of flux since 2020, when the big tech company fired the leaders of its AI ethics team. Prominent Black female scientist Timnit Gebru, the first to be shown the exit door, responded to Hinton's departure saying, "When Geoff Hinton was asked about the women's concerns about AI for which we got fired, pushed out, harassed he said our concerns are not 'existential risks' to humanity where as his are. This is what I mean about the almost exclusively white dudes who keep on talking about 'existential risk.'"

Margaret Mitchell, a former leader on Google’s AI ethics team, is also upset that Hinton didn’t speak out about ethical concerns related to AI development during his decade in a position of power at Google, especially after the 2020 ouster of Timnit Gebru, who had studied the harms of large language models before they were widely commercialized into products such as ChatGPT and Google’s Bard.

In 2018, Hinton had dismissed the need for explainable AI, arguing it would be a "complete disaster" if regulators insisted that AI systems be explainable. The Canadian AI pioneer claimed that people themselves cannot explain how they make their own decisions, so requiring such explanations from AI systems would be counterproductive.

Since the 1970s, Hinton has been at the forefront of developing neural network models inspired by the human brain. Although he had long maintained a detachment from the social impact of his work, claiming "I'm an expert on trying to get the technology to work, not an expert on social policy", his resignation cited concerns over the dangers of AI as the reason for his change of heart. This shift in stance suggests that Hinton can no longer remain neutral and must now acknowledge the potential impact of his work on society.

The post Geoffrey Hinton Is The Bad Dad of AI appeared first on Analytics India Magazine.

How This Startup Witnessed 400% Revenue Growth With Cloud Telephony Services 

Cloud telephony has been steadily growing in India over the past few years, with increasing demand for cost-effective and efficient communication solutions among businesses.

The market is expected to continue growing in the coming years due to the country’s large and diverse business landscape, and the need for remote and flexible communication tools.

Founded in 2018, CloudConnect became the first company in India to get a licence from the Department of Telecommunications (DoT) to operate as a Business-to-Business Virtual Network Operator (VNO).

“While large organisations were migrating to the cloud, there wasn’t any concrete focus on SMEs. We noticed that if somebody comes into the picture and can cater to the requirements of these small businesses where they only want to focus on their core business rather than focusing on communications and other services which are there, it could really make a difference,” Vidhu Nautiyal, Co-founder and Chief Revenue Officer, CloudConnect, told AIM.

The company gives SMEs access to enterprise communication systems such as Private Branch Exchange (PBX) on Mobile, IP Phone Solutions, Unified Communications, and Customised Business Communication Solutions.

It’s India’s first fully mobile advanced enterprise communication and collaboration system, which is hosted on the cloud and can be accessed through a user-friendly app on individual smartphones.

The company also claims to have no real competition. “We don’t have any direct competitors. Which means apart from telcos like Airtel, Vodafone and Reliance Jio, I would say, there is no other player which is providing pure Unified Communication Services in India apart from us,” Nautiyal said.

Impressive growth

Being the first licensed VNO gave CloudConnect a significant advantage. In the financial year 21-22, the company managed to grow around 400% in terms of revenue. In 2022 alone, it grew at an impressive rate of 261%.

So far, CloudConnect has onboarded clients such as Shipyaari, Univo, Real11, My Real Estate, Apollo Pharmacy, AHFL, and ICICI Prudential.

For Shipyaari, one of their biggest challenges was connecting their field force with their customers and office. To solve this challenge, they integrated CloudConnect’s voice API into their application which helped bring the entire organisation’s communication channels onto one platform, resulting in improved delivery times and streamlined communication.

Similarly, by integrating CloudConnect’s API in their CRM, Amity University has managed to increase their reach to customers by 40%. Further, the company has also helped Felix Hospital, which is based in Noida, set up the backend system to manage the COVID helpline in real-time, ensuring that no calls were missed.

Currently, CloudConnect is providing its products and services to customers in Tier 1 cities only; however, they are soon planning to be available across the country.

In the three years since its inception, the startup has onboarded around 350 customers, and Nautiyal revealed that the company is growing at a rate of almost 100% in customer acquisitions.

“This is just the tip of the iceberg. We are surely going to be growing more because as we speak, we are getting a lot of queries,” Nautiyal said.

Strategic partners

CloudConnect maintains its own private hosting. “We do have a private hosting which is available in Asia’s largest tier four data centre, and we do have our hosting in multiple places like Delhi, Mumbai and Bangalore.”

The company is also in strategic partnership with CRM providers such as Zoho, Freshdesk and Salesforce. “At the end of the day, these are the people who are providing the CRM solutions. What we do is we integrate our telephony into their system and then they go ahead and take it to the market,” Nautiyal said.

Currently, the company is gearing up to launch a voice sentiment analysis engine, a first for India. Built in collaboration with IIT Delhi, the new tool will use natural language processing (NLP) to analyse the tone and sentiment of a caller in real time during conversations with customer service agents.

“This information will then be displayed on the agent’s screen, allowing them to adjust their approach accordingly. If the sentiment is negative, the call will be escalated to a supervisor who can intervene and address the issue,” Nautiyal said.
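CloudConnect's engine is proprietary, so the details of its model are not public. As a purely illustrative sketch of the escalate-on-negative-sentiment flow described above, a crude lexicon-based scorer can stand in for the real NLP model; the word lists, threshold, and function names below are all assumptions made for this example:

```python
# Hypothetical word lists standing in for a trained sentiment model.
NEGATIVE = {"angry", "refund", "terrible", "cancel", "complaint", "worst"}
POSITIVE = {"thanks", "great", "helpful", "perfect", "resolved"}

def sentiment_score(utterance: str) -> int:
    """Crude lexicon score: positive word hits minus negative word hits."""
    words = utterance.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def should_escalate(transcript: list[str], threshold: int = -2) -> bool:
    """Escalate to a supervisor when cumulative call sentiment drops too low."""
    return sum(sentiment_score(u) for u in transcript) <= threshold
```

In a live system, each caller utterance would be scored as it arrives, the running total shown on the agent's screen, and `should_escalate` polled to decide when to pull in a supervisor.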

AI and Analytics Play

CloudConnect uses AI, ML and analytics extensively. “We provide dashboards where very extensively we use the AI and analytics,” Nautiyal said. The dashboard has around 50 different widgets that process data in real time using AI.

When it comes to generative AI, however, the case is a bit different. With the launch of generative AI models like GPT-3, GPT-4, Midjourney and Stable Diffusion, enterprises are increasingly looking to leverage these technologies for different purposes. But Nautiyal said the company has not yet found a use case for generative AI.

“Nonetheless, if we find a use case, then we would be more than happy to integrate it,” Nautiyal concluded.

The post How This Startup Witnessed 400% Revenue Growth With Cloud Telephony Services appeared first on Analytics India Magazine.

ChatGPT and the new AI are wreaking havoc on cybersecurity in exciting and frightening ways

Generative artificial intelligence is transforming cybersecurity, aiding both attackers and defenders. Cybercriminals are harnessing AI to launch sophisticated and novel attacks at large scale. And defenders are using the same technology to protect critical infrastructure, government organizations, and corporate networks, said Christopher Ahlberg, CEO of threat intelligence platform Recorded Future.

Generative AI has helped bad actors innovate and develop new attack strategies, enabling them to stay one step ahead of cybersecurity defenses. AI helps cybercriminals automate attacks, scan attack surfaces, and generate content tailored to various geographic regions and demographics, allowing them to target a broader range of potential victims across different countries. Cybercriminals have adopted the technology to create convincing phishing emails: AI-generated text helps attackers produce highly personalized emails and text messages that are more likely to deceive targets.

"I think you don't have to think very creatively to realize that, man, this can actually help [cybercriminals] be authors, which is a problem," Ahlberg said.

Also: AI could automate 25% of all jobs. Here's which are most (and least) at risk

Defenders are using AI to fend off attacks. Organizations use the technology to prevent leaks and to find network vulnerabilities proactively. It also automates tasks such as setting up alerts for specific keywords and detecting sensitive information exposed online. Threat hunters use AI to flag unusual patterns and summarize large amounts of data, connecting the dots across multiple sources of information and surfacing hidden patterns.
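As a toy illustration of the keyword-alerting and sensitive-data detection idea, a few regular expressions can flag watchlist terms and credential-like strings in monitored text. The patterns and names below are illustrative assumptions for this sketch, not Recorded Future's actual rules:

```python
import re

# Illustrative patterns only; real threat-intel platforms use far richer rules.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "watchlist_keyword": re.compile(r"\b(breach|dump|exploit)\b", re.IGNORECASE),
}

def scan_text(text):
    """Return (pattern_name, matched_string) alerts for monitored text."""
    return [(name, m.group(0))
            for name, pat in PATTERNS.items()
            for m in pat.finditer(text)]
```

A monitoring pipeline would run `scan_text` over paste sites, forums, or logs and raise an alert whenever the returned list is non-empty.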

The work still requires human experts, but Ahlberg says the generative AI technology we're seeing in projects like ChatGPT can help.

"We want to speed up the analysis cycle [to] help us analyze at the speed of thought," he said. "That's a very hard thing to do and I think we're seeing a breakthrough here, which is pretty exciting."

Ahlberg also discussed the potential threats that highly intelligent machines might bring. As the world becomes increasingly digital and interconnected, the ability to bend reality and shape perceptions could be exploited by malicious actors. These threats are not limited to nation-states, making the landscape even more complex and asymmetric.

Also: ChatGPT is more like an 'alien intelligence' than a human brain, says futurist

AI has the potential to help protect against these emerging threats, but it also presents its own set of risks. For example, machines with high processing capabilities could hack systems faster and more effectively than humans. To counter these threats, we need to ensure that AI is used defensively and with a clear understanding of who is in control.

As AI becomes more integrated into society, it's important for lawmakers, judges, and other decision-makers to understand the technology and its implications. Building strong alliances between technical experts and policymakers will be crucial in navigating the future of AI in threat hunting and beyond.

AI's opportunities, challenges, and ethical considerations in cybersecurity are complex and evolving. Ensuring unbiased AI models and maintaining human involvement in decision-making will help manage ethical challenges. Vigilance, collaboration, and a clear understanding of the technology will be crucial in addressing the potential long-term threats of highly intelligent machines.

Also: How ChatGPT works

Ahlberg also raised concerns about China, Russia, and economic adversaries deploying autonomous machines. These countries are unlikely to slow AI development or to share the same ethical considerations. While having the ability to "pull the plug" on such machines is a smart safeguard, he suggests that technology's integration into society and the global economy will make it hard to detach. Ahlberg emphasizes the need to design products and machines with clarity about who controls them.

"The big thing that the internet did in all of this is that the internet sort of became the place where all the world's information migrated," said Ahlberg. "These large language models are doing pretty magical things… to speed up that thinking cycle."

He added, "In the next 25 years, the world becomes a reflection of the internet."
