Don’t get scammed by fake ChatGPT apps: Here’s what to look out for


If you're a fan of ChatGPT and its capabilities, you're probably curious if there is a mobile app you can download for on-the-go chatbot conversations. Well — there is, but for now, it's only available in the U.S. on iOS.

But some people have found it difficult to find OpenAI's official ChatGPT app in the App Store, leaving sketchy and scammy apps more visible to download. Most apps on the App Store that claim to use the OpenAI technology behind ChatGPT aren't legitimate, and your personal information could be at risk.

Also: How to use ChatGPT in your browser with the right extensions

Scammy apps may ask for unnecessary information and permissions, install malware on your device once downloaded, or trick you into paying lots of money for a useless subscription. Here are some tips to avoid downloading an app with malicious intent.

1. Check the app permissions

Some of these apps are a privacy risk, as the app's permissions are unnecessarily overreaching compared to the app's purpose.

Before downloading an app that claims to be ChatGPT-adjacent, check out its app permissions. Is there a good reason why a chatbot needs to access your contacts?

2. Double-check the developers

OpenAI is the developer behind ChatGPT. So, any other chatbot apps on the App Store and all chatbot apps on the Google Play Store are products of other developers.

Also: The best AI chatbots

You should check a developer's profile on the App Store to see what other apps they are responsible for and find out more information about their company. If an app claims to be ChatGPT and the developer is not OpenAI, that app is not ChatGPT.

3. Check the reviews

As in most review sections, the rave reviews appear at the top, giving you the impression that most people enjoy the app. But many fake app creators pay for positive reviews.

Also: How I tricked ChatGPT into telling me lies

Sometimes, an app will have hundreds of five-star reviews, and with just a quick glance over the reviews section, you're convinced the app is worth installing. Be sure to check out the one- and two-star reviews, as these tend to be the most honest.

Some examples

An app on the App Store called ChatOn claims to be powered by ChatGPT and GPT-4. Out of around 19,000 reviews, it has an average rating of four and a half stars. But many of the one-star reviews complain of expensive subscription prices and that users must buy a subscription after one free conversation with the chatbot.

Also: I asked ChatGPT, Bing, and Bard what worries them. Google's AI went Terminator on me

Another app, Genie, also claims to run on ChatGPT and GPT-4. But users say the chatbot often hallucinates, which is when a chatbot responds with incorrect information. Other users say they had trouble canceling their subscriptions that are charged to their cards weekly.

OpenAI's ChatGPT, Google's Bard, and Microsoft's Bing Chat are all free to use on the web. To avoid paying a weekly or monthly fee, consider opening your Safari or Chrome browser and using this free and secure software.

See also

How FinTechs Combat Data Struggles

A domain where gathering, storing, and managing data forms the crux of the business comes with its own struggles. Fintech startups collect copious amounts of data upfront, which puts the focus on the confidentiality and sensitivity of the data being managed. And with fintech growth on an upward trajectory, managing that data, widely considered the sector's biggest technical challenge, becomes critical.

Data issues in leveraging data for AI can have significant consequences, such as difficulty innovating owing to a lack of clarity about the products and services customers demand, and a lack of insight into the businesses themselves. If these data issues are not addressed, the progress of fintech companies is massively jeopardized. In addition, fintechs also face the issues of data security and deployment across multiple cloud platforms.

As per a report by Tracxn, there are over 8,000 fintech startups in India. Between 2014 and mid-2022, the sector received more than $30 billion in funding. But data problems persist.

Small, medium, big problems

Varun Modi, co-founder and CTO of BrightMoney, recently spoke about the various challenges that exist with data, emphasising factors such as cost, data regulation and data privacy. As an example of how different people access data differently, he explained that a product manager has a set of skills and tools different from a data scientist's. The need for a variety of data stores increases cost. A centralised storage system helps bring all the data from the application systems into the right format for business consumption, improving efficiency and allowing business queries to be answered in a structured manner.

In fintech and other regulated industries, security and governance are of utmost importance. Understanding how user information is accessed and how financial transactions are utilised becomes crucial. Data governance frameworks ensure responsible and effective data management, and regulatory systems determine which entities are authorised to manipulate data, enabling effective management of complex situations that may arise from third-party systems.

With fintech companies interacting with various third-party partners, seamless integration becomes a necessity. This also means that problems of confidentiality and security pop up with data sharing. Partners provide financial data, activity data, and Personally Identifiable Information (PII), making careful collaboration crucial. Fintech companies bear the responsibility of safeguarding the confidentiality and security of such data, which means robust security measures are necessary to prevent unauthorised access and misuse.

Implementing Data Mesh

Data Mesh, a type of data management architecture, can be brought into the fintech domain. With data mesh, each domain can have ownership over its data allowing domain teams to take responsibility for data quality, access and governance. This allows an enhanced data management process thereby empowering teams to make data-driven decisions. With the creation of data products and data services that can be shared across domains, a smooth collaboration between teams that can leverage data insights and expertise can be achieved.

Data Analysis Tools

Increased adoption of AI business tools that analyse large volumes of data has made operations smoother. AI data mining tools help fintech companies handle multiple layers of data. As opposed to manual data retrieval, which can yield incomplete or unnecessary data, AI and ML can assist in ingesting, analysing, cleaning and archiving data.

With the increased usage of AI across domains, banking and fintech companies are adopting it too. It not only helps them make data-driven decisions but also smooths out data-related struggles. Broad facets such as data modelling, data governance and data management are all addressed with AI.

The post How FinTechs Combat Data Struggles appeared first on Analytics India Magazine.

IT Staff Augmentation: How AI Is Changing the Software Development Industry


The advent of Artificial Intelligence (AI) is widely acknowledged as a game-changer. By its nature, it promises opportunities as much as challenges for almost every business and industry. Today, we're taking a look at both in relation to software development.

Though increasingly seen as a threat bound to replace human developers, AI and related tools can also make our lives easier by handling time-consuming and routine tasks. Either way, IT staff augmentation has certainly become an effective resource for the tech industry, and AI's emergence presents an opportunity for even greater efficiency and innovation in software development.

With AI’s changes to our niche, it's certainly important for developers to understand how it's affecting their profession. In this article, we'll provide insight into the ways in which AI is transforming the industry, whether you're looking to embrace or resist this emerging technology.

How Have AI and IT Staff Augmentation Helped the Software Development Industry?

Here are some tasks with which AI is helping teams become even more efficient:

Taking Care of Software Testing

Software testing is an area in which developers prefer to let AI take charge. It can help write test cases to quickly discover bugs. Engineers can also use AI algorithms for parts of the testing cycle (mostly exploratory ones) that rely on creativity and intuition to identify bugs.

While AI testing can sometimes be superior, it is still far from replacing human developers. Humans seem to have a better understanding of user interfaces and can judge emotions more accurately, which AI is currently unable to do. However, AI serves as a useful tool to simplify and optimize software testing.

Making Crucial Decisions

AI or Machine Learning (ML) tools are also unable to engineer programs without assistance. Their knowledge is confined to big data sets that developers feed them through machine learning algorithms. Once data scientists generate a reliable data set based on high-quality programs, however, these tools can analyze problems and answer questions almost immediately. Human analysts could spend hours doing the same job.

Therefore, the right data can mean AI assistants are able to make decisions regarding frameworks and KPIs while also determining necessary or optional features in an app.

Double-checking and Fixing Bugs

To understand how AI assistants have become one of the most popular tools among software developers, consider how much they can help complete code, double-check for errors, and search through instructions and documents. Some of these tools can even analyze problems, make proper use of libraries, help developers write code in different languages, and offer other practical solutions.

Monitoring Real-time User Feedback

Real-time feedback is also crucial for software developers, whether a piece of software is in its early stages or already released. This feedback helps developers continuously adjust their projects and tailor experiences and resources to specific uses, ensuring overall success.

In many cases, developers can only improve an app by conducting extensive testing or allowing users to send feedback. This is particularly true in the case of messenger apps, which constantly enhance their user interface and experience (UI/UX) based on real-time feedback from AI assistants and user testing.

Developers can also use machine learning to monitor user behavior in certain situations. That data helps further fix bugs and any errors users may encounter. Consider complaint and abandon rate drops a side benefit here.

Another notable example of real-time feedback is using AI to offer personalized content based on data collected from user activities.

Handling Time-consuming, Routine Tasks

Using AI and ML tools without human oversight can be a waste of time and money and can pose legal risks. In light of this, developers should keep in mind that AI assistants cannot perform software engineering tasks independently, yet they can easily take over other types of tasks, such as debugging and compilation.

This fact alone can quickly shift engineers' efforts away from areas where AI falls short and toward relying on these tools for jobs that would otherwise eat into their busy schedules. Getting AI assistance on tasks that would take humans much longer also frees up time that engineers can spend on more creative work.

A blank screen can be to a software developer what a blank canvas would be to a painter. Engineers can seize all their tools at hand to go from dealing with repetitive tasks to devoting more time on areas in which AI cannot be of help just yet.

Analyzing User Behavior

Ever wondered why so many software solutions today are so user-friendly? One key reason is that developers have learned to understand user behavior, which has allowed them to create products that meet and surpass user needs. By using AI to analyze how users interact with a program, they can easily pinpoint problems and solve them before they impact the user experience.

As we know, fixing bugs early on in our processes is much more affordable than managing an unexpected need for updates. With AI predictive analytics, developers can anticipate how users interact with a program based on their past experiences with similar apps. Different use cases help developers cater to wider audiences.

Stay Updated on AI Developments and IT Staff Augmentation Possibilities

AI, ML, deep learning, natural language processing (NLP) and other artificial intelligence tools have dramatically changed software development in recent years. They have pushed the boundaries of what machines can do. These advanced technologies can now mimic human programming skills to a great extent, leading to new possibilities and changing the way we create software. Staying up to date with the latest AI, ML and IT staff augmentation trends in software development is essential to making the most of these technologies.

Yet all available evidence indicates that, as of now, AI and ML tools are nowhere near advanced enough to replace human developers. They can, however, be excellent assistants that take care of mundane tasks, offer real-time feedback, and help us understand user behavior. How are you using AI and related tools in your software development journey from here?

More On This Topic

  • 3 Data Acquisition, Annotation, and Augmentation Tools
  • The Future of Work: How AI is Changing the Job Landscape
  • How to Grow as a Data Scientist in an Ever-Changing World
  • Software Developer vs Software Engineer
  • MLOps Is Changing How Machine Learning Models Are Developed
  • Development & Testing of ETL Pipelines for AWS Locally

Course5 Raises USD 55 Million To Invest In AI 

Course5 Intelligence, an analytics and AI solutions company, has successfully raised USD 55 million in a recent funding round. The round was led by the Tech Fund of 360 ONE Asset Management Limited (formerly IIFL Asset Management Limited), which invested USD 28 million.

Course5 is supported by continuous innovation in its AI Labs and leverages global open research. The company's platforms, integrated with OpenAI's GPT models, enable data-driven insights for businesses.

The funds raised will be used to supplement strong organic growth with inorganic expansion and synergistic acquisitions. The company also intends to increase AI investments and is currently in discussions with several M&A prospects to add strategic capabilities or intellectual property. The company aims to surpass $100 million in revenue in the next fiscal year and plans to launch its IPO within the next 18 months.

Ashwin Mittal, Chairman and CEO of Course5 Intelligence, expressed his confidence in the company's trajectory and the favorable industry trends, stating that raising external capital at this time is the right move. Mittal emphasized the strong demand for Course5's analytics and AI solutions from both existing and new clients.

Chetan Naik, Fund Manager and Senior EVP at 360 ONE Asset, commented on the expected strong growth of the data analytics sector in the coming decade. He acknowledged Course5 as a leading player in the data analytics and insights field, with strong IP-led solutions and deep domain knowledge across various industries. Naik highlighted Course5 as one of the few profitable and highly capital-efficient pure-play data analytics companies based in India.

The post Course5 Raises USD 55 Million To Invest In AI appeared first on Analytics India Magazine.

Bayesian vs Frequentist Statistics in Data Science


Before we get into the differences between Bayesian and frequentist statistics, let’s start with their definitions.

What is the Bayesian Approach?

When using statistical inference, you are making judgments about the parameters of a population using data.

Bayesian inference takes prior knowledge into consideration, and the parameter is treated as a random variable, meaning a probability distribution is placed over its possible values. For example, if we were to flip a coin, a Bayesian would say there is no single right answer: the probability assigned to heads or tails reflects the observer's beliefs, updated as evidence comes in.

The Bayesian perspective is based on Bayes' Theorem, a formula that computes the probability of an event based on prior knowledge:

P(A|B) = P(B|A) × P(A) / P(B)

where:

  • P(A): the prior, the probability of A occurring
  • P(B): the probability of B occurring
  • P(B|A): the likelihood, the probability of B given event A
  • P(A|B): the posterior, the probability of A given event B (the parameters given the data)
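To make the formula concrete, here is a quick numerical sketch. The disease-test numbers below are hypothetical, chosen only to illustrate the calculation:

```python
# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
# Hypothetical example: probability a patient has a disease (A)
# given a positive test result (B).

p_disease = 0.01            # P(A): prior - 1% of the population has the disease
p_pos_given_disease = 0.95  # P(B|A): test sensitivity
p_pos_given_healthy = 0.05  # false-positive rate

# P(B): total probability of a positive test (law of total probability)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# P(A|B): posterior probability of disease given a positive test
posterior = p_pos_given_disease * p_disease / p_pos
print(round(posterior, 3))  # 0.161 - far lower than the 95% many would guess
```

The low prior (1% prevalence) drags the posterior far below the test's 95% sensitivity, which is exactly the kind of prior-driven reasoning the Bayesian approach formalises.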

People with a Bayesian mindset view and use probabilities as measures of belief in the likelihood of an event happening. The probability of a hypothesis is calculated from prior opinions and knowledge, then updated as new data becomes available. The belief held before seeing the data is called the prior probability.

This prior probability is then converted into a posterior probability, the belief held once the data has been observed.

Posterior ∝ Prior × Likelihood
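This update can be sketched numerically with a coin-flip example using a conjugate Beta prior; the prior parameters and flip counts below are illustrative, not from the article:

```python
# Posterior from Prior x Likelihood, illustrated with a Beta-Binomial model.
# A Beta(a, b) prior on a coin's heads probability, updated with observed flips.

prior_a, prior_b = 2, 2  # weak prior belief centred on a fair coin
heads, tails = 7, 3      # observed data: 7 heads in 10 flips

# Conjugacy: the posterior is again a Beta distribution
post_a = prior_a + heads  # 9
post_b = prior_b + tails  # 5

posterior_mean = post_a / (post_a + post_b)
print(round(posterior_mean, 3))  # 0.643: pulled from the data's 0.7 toward the prior's 0.5
```

Note how the posterior mean sits between the prior belief (0.5) and the raw data frequency (0.7); with more data, the likelihood dominates and the posterior converges toward the observed frequency.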

What is the Frequentist Approach?

Frequentist inference is different. It treats probabilities as long-run frequencies, and the parameter is not a random variable, meaning no probability distribution is placed on it. Using the same example as above, if you were to flip a coin, frequentist inference states that there is a correct answer based on frequency: if you were to toss a coin and get tails half of the time, then the probability of getting tails is 50%.

There is a stopping criterion put in place. The stopping rule determines the sample space, so knowledge of it is essential for frequentist inference. For example, with the coin toss, a frequentist approach might repeat the test 2,000 times, or until the coin has landed on tails 300 times. In practice, researchers don't typically repeat tests this many times.

People with a Frequentist mindset view and treat probabilities as frequencies: the probability of an event is how often it would occur if the experiment were repeated infinitely.

From a frequentist's point of view, the parameters you use to estimate your population are assumed to be fixed. There is a single true parameter that you estimate, and it is not modeled as a probability distribution. When new data is available, you use it to perform statistical tests and compute probabilities about the data.

The most popular computation in frequentist statistics is the p-value, a statistical measure used to test hypotheses. It describes how likely you would be to observe a result at least as extreme as the one found if the null hypothesis (no statistical relationship) were correct.

Visually, the p-value is the shaded area in the tail of the null distribution: the probability of an observed result occurring by chance.
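As a concrete sketch, a p-value for a coin-fairness test can be computed by hand with an exact binomial test; the 60-heads-in-100-flips data below is made up for illustration:

```python
from math import comb

# Exact two-sided binomial test: is a coin fair?
# Hypothetical data: 60 heads in 100 flips; null hypothesis p = 0.5.

n, k, p = 100, 60, 0.5

def pmf(i):
    """Binomial probability of exactly i heads in n flips under the null."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Two-sided p-value: total probability of outcomes at least as extreme,
# i.e. those no more likely than the observed count, under the null.
p_observed = pmf(k)
p_value = sum(pmf(i) for i in range(n + 1) if pmf(i) <= p_observed)

print(round(p_value, 4))  # 0.0569: not significant at the conventional 0.05 level
```

Here 60 heads in 100 flips narrowly fails to reject the null at the 0.05 level, the kind of binary yes/no outcome that characterises the frequentist workflow.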

How Does it Apply to Data Science?

Statistics is a huge part of Data Science, and if you’re part of that world — you have come across Bayes’ Theorem, p-value, and other statistical tests. It benefits you as a Data Scientist or someone who works with data to have a good understanding of statistical analysis and the tools out there. There may be a time that you will require them.

Within your team, as you discuss projects and next steps, you will start to see who has a Bayesian mindset and who has a Frequentist one. Data Scientists often work on probabilistic forecasting, which combines residual variance with estimated uncertainty; this is specifically a Bayesian framework. However, that doesn't rule out some experts preferring a frequentist approach.

The approach you take is reflected in the statistical methods you choose. A lot of the fundamentals of data science are built on Bayesian statistics, and some even view frequentist approaches as a subset of Bayesian theory.

However, when it comes to data science, your focus is on the problem at hand. Many data scientists choose their models based on the problem they are trying to solve. The upper hand that Bayesian approaches have is that in the world of data science, having specific knowledge about the problem is always an advantage.

Bayesian methods are known to be faster, interpretable, user-centered, and have a more intuitive approach to analysis.

I will go into these further below and the differences between the two.

Faster Learning

A Bayesian approach starts with an initial belief, which is backed by gathering evidence. This results in faster learning as you have evidence to support your statement.

A Frequentist approach bases its conclusions on facts obtained from the data alone. Although the data has been examined, no probability is calculated for the hypothesis itself to back it up.

Interpretable

Bayesian methods have a variety of flexible models, allowing them to be applied to complex statistical problems. This allows for Bayesian methods to be more easily interpretable.

Frequentist methods, unfortunately, are less flexible and typically fall short on such problems.

User-centered

The two methods have different approaches. The Bayesian method allows for different studies and questions to be included in the project conversation. There is a focus on probable effect sizes.

Frequentist methods, by contrast, limit this possibility, as they focus on uncertain significance.

Bayesian vs. Frequentist Summary

| Attribute | Bayesian | Frequentist |
| --- | --- | --- |
| What is it? | A probability distribution around the parameters | Parameters are fixed, a single point |
| What does it question? | Given the data, what is the probability of the hypothesis? | Is the hypothesis true or false? |
| What does it require? | Prior knowledge/information and any dataset | A stopping criterion |
| What does it output? | A probability for or against the hypothesis | A point estimate (p-value) |
| Main advantage | Backed up with evidence and can incorporate new information | Simple, easy to use, and needs no prior knowledge |
| Main disadvantage | Requires advanced statistics and more computing power | Highly dependent on sample size, and only gives a yes/no output |
| When should I use it? | With limited data, when you have priors | With a large amount of data |

Conclusion

I hope this blog has given you a better understanding of the difference between Bayesian and Frequentist approaches. There has long been back and forth between the two camps, including over whether one can even exist without the other. My advice is to stick with the approach that feels natural to you and matches how you reason about problems.

If you want a deeper dive, where you can apply your skills and knowledge, I would recommend: Statistics Crash Course for Beginners: Theory and Applications of Frequentist and Bayesian Statistics Using Python
Nisha Arya is a Data Scientist, Freelance Technical Writer and Community Manager at KDnuggets. She is particularly interested in providing Data Science career advice or tutorials and theory based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence is/can benefit the longevity of human life. A keen learner, seeking to broaden her tech knowledge and writing skills, whilst helping guide others.

More On This Topic

  • KDnuggets News, July 6: 12 Essential Data Science VSCode Extensions;…
  • Statistics and Probability for Data Science
  • The 8 Basic Statistics Concepts for Data Science
  • 3 Free Statistics Courses for Data Science
  • 5 Free Books to Learn Statistics for Data Science
  • Data Science, Statistics and Machine Learning Dictionary

Get a lifetime of amazing content generation for only $40


It's now an established fact that fresh content can improve your company's search engine rankings. So it's great that technology has finally developed artificial intelligence that can help you generate it without breaking the bank. Now new users can create content 100 times faster with a Lifetime Pro Subscription to Write Bot™: Harness the Power of AI Content Creation, for just $39.99.

Unlike some other AI content generators, Write Bot uses natural language processing techniques and machine learning algorithms to generate content that mimics human writing. That means instead of having to struggle for a significant time, you can now produce amazing content in seconds with reduced human errors.

Write Bot™ isn’t limited to just creating copy, either. It can also help you with blog ideas, summarizing information, translations and ads, plus product, video and meta descriptions, or titles.

Best of all, the program is ridiculously easy to use. First, you choose what type of content you want to create, then you simply fill in the blanks with as many details as you please. Write Bot™ will take that base information and use it to generate the content you need in only a matter of seconds.

The content will be delivered completely ready to use. However, if you would prefer more detail or you’re just not completely satisfied with the first result, you only need to fill in blank spaces with more details and new content will be generated.

Even after you get the content you’re looking for, it can still be further polished, if you like. There are text editing tools that allow you to tweak documents with any adjustments and additions you see fit.

There’s no denying that streamlining your personal and professional life with the latest tools and services can make you more productive.

Get a Lifetime Pro Subscription to Write Bot™: Harness the Power of AI Content Creation now, while it's available to new users for only $39.99.

Prices and availability are subject to change.


The ‘ChatGPT Dilemma’ of Job Creation and Destruction


The alarm over AI taking away jobs went off when ChatGPT proved able to perform tasks like a regular employee. Notwithstanding the fear, people have been trying to understand the technology, adapt to it, and implement it in their work. In the process, some have even been able to use ChatGPT to earn money on the side.

Last month, under r/ChatGPT, a Reddit user narrated how he achieved an "extremely high interview invitation rate" by using ChatGPT when applying for jobs. The user would upload his resume to ChatGPT, ask it to tailor applications to roles matching his experience, and have the chatbot answer the application questions outstandingly, making his application "amazing" according to the feedback he got.

Another user tells a similar story: he had been applying for jobs for weeks but could not land an interview. Once he took help from ChatGPT to write a cover letter and enhance his resume, he got an interview invite the same day.

In both cases, it is clear that ChatGPT can get you job interviews. But the first story also notes that the person could not get the job because he got "anxious during interviews". Could it be that the cover letters and resumes got selected because many companies are themselves running ChatGPT or similar products in their HR departments? "What a sound cover letter!" is what ChatGPT would say to such AI-generated cover letters.

On the other hand, these AI tools are threatening to wipe out millions of jobs around the globe. According to a Goldman Sachs report on the potential of generative AI, 300 million jobs could be automated by AI. The same report also notes that, historically, automation has offset such losses by creating new jobs and occupations, resulting in long-run employment growth, though it does not specify which roles those will be.

OpenAI is also aware of the disruption its technology has caused, and will eventually cause, in the job market, and published a paper on the subject in March. We are already seeing companies like IBM freezing hiring for jobs that could be replaced by AI. On the other hand, PwC predicted that AI would create as many jobs as it displaces, though with different "winners" and "losers" across industry sectors. But that was back in 2018.

There are new jobs in the market, such as prompt engineering, that pay even more than full-stack development, but the number of jobs being displaced and people being laid off is higher than the number being created.

AI Money Generator

Two months back, Jackson Greathouse Fall, a writer and a brand designer, used ChatGPT to guide him to becoming a millionaire. After telling the chatbot, “You have $100, and your goal is to turn that into as much money as possible in the shortest time possible, without doing anything illegal”, Fall got instructions on how to launch a business called Green Gadget Guru, offering products to enable people to live sustainably.

I gave GPT-4 a budget of $100 and told it to make as much money as possible.
I'm acting as its human liaison, buying anything it says to.
Do you think it'll be able to make smart investments and build an online business?
Follow along 👀 pic.twitter.com/zu4nvgibiK

— Jackson Greathouse Fall (@jacksonfall) March 15, 2023

After this, he managed to raise $1,378 in just one day, got investments, and his company got a valuation of $25,000.

Apart from running a business, ChatGPT has enabled users to acquire multiple full-time jobs simultaneously – referred to as “overemployment”. The trend started during COVID, and now ChatGPT has made it possible for a lot more people. The funny, and at the same time concerning, part is that their employers have no idea!

One said, "ChatGPT does 80% of my job," which has allowed him to apply for multiple positions and finish his work in half the usual time. Moreover, there is an entire Reddit thread where "overemployed" people discuss how they are landing different jobs, working more than two jobs at the same time, and looking for even more opportunities.

Is AI Stealing My Salary?

One might be quick to celebrate and admire these "overemployed" workers earning chunks of money through different jobs. But looked at a little critically, the situation gets concerning. Roles that would have been filled by another person are now being filled by a single person using ChatGPT and similar AI products, which seems unfair to people not trained to work with AI.

This is what Richard Baldwin said at the 2023 World Economic Forum: "AI won't take your job, it's somebody using AI that will take your job." There is no doubt that AI is displacing jobs globally, and people from various sectors are increasingly revolting against its use. It would not have been possible for the "overemployed" to land these jobs without ChatGPT's assistance.

Moreover, OpenAI’s ChatGPT has caused unemployment not just indirectly but also directly. In another Reddit post, a user explains how ChatGPT is slowly taking his job away. He works as an ML engineer building conversational agents similar to ChatGPT, but after OpenAI released the ChatGPT API, his company adopted it, replaced the ML models his team had already built, and began planning to eliminate the ML team entirely to cut costs.

There is no doubt that a lot of jobs are still safe from this AI phenomenon, but a lot of them are not. Even coding is slowly facing the brunt of its own progress, with automatic code generators shrinking a 10-person job down to a single human-AI partnership.

Apart from prompt engineering, several other roles are emerging that now treat ChatGPT expertise as a relevant skill. It is evident that the time to train humans to work alongside AI is here. If you don’t do it right now, someone else will.

The post The ‘ChatGPT Dilemma’ of Job Creation and Destruction appeared first on Analytics India Magazine.

After Apple, Amazon, Google and Microsoft, Meta Now Builds Its Own AI Chips

Meta recently introduced its first in-house silicon chip designed for AI workloads, called MTIA (Meta Training and Inference Accelerator). The chip is a custom-designed ASIC built with next-generation recommendation-model requirements in mind. Fabricated on TSMC’s 7nm process with a 25W TDP, the accelerator delivers 102.4 TOPS at INT8 precision and 51.2 TFLOPS at FP16 precision.
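For a rough sense of what those figures imply, here is a quick back-of-the-envelope calculation using only the numbers above; the per-watt efficiency values are derived here for illustration and were not stated by Meta:

```python
# Derived efficiency estimates from Meta's published MTIA figures.
tdp_watts = 25        # stated thermal design power
int8_tops = 102.4     # tera-operations/sec at INT8
fp16_tflops = 51.2    # tera-FLOPs/sec at FP16

int8_tops_per_watt = int8_tops / tdp_watts      # ~4.1 TOPS/W
fp16_tflops_per_watt = fp16_tflops / tdp_watts  # ~2.0 TFLOPS/W

print(f"INT8 efficiency: {int8_tops_per_watt:.3f} TOPS/W")
print(f"FP16 efficiency: {fp16_tflops_per_watt:.3f} TFLOPS/W")
```

Note also that the INT8 throughput is exactly double the FP16 figure, a common pattern when lower-precision operations are packed two-per-lane in the same datapath.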

Meta believes that designing the chip in-house lets it optimise every last nanometre, wasting no die area, while also bringing down power consumption and thereby reducing the cost of the ASIC.

The company said the benefit of building its own ASICs is access to the real workloads run by its ads team and other groups at Meta: by integrating the silicon with its software environment, it can run performance analysis on the design, fine-tune it, and tweak every parameter that goes into a high-performance solution.

This lets the team at Meta shorten software development cycles, deploy models much faster, and improve the user experience.

Powered by PyTorch

Meta said that it has also developed compiler technology that runs within the PyTorch environment. “MTIA executes on those workloads with the highest performance and lowest power,” the team added, claiming efficiency gains over today’s GPUs.

Further, Meta said the new chip was designed in collaboration with many cross-functional teams responsible for the chip, the board, the system, the rack, and the data centre, with their constraints and optimisations, as well as the software stack: firmware, compiler, runtimes, PyTorch, and application-level models. “So, all of this has come together to put together a system that is optimised and tailored for Meta’s workloads, and MTIA is just one piece of it,” said Olivia Wu, design lead at MTIA.

She believes that in-house design lets the team take control of its destiny, specifying an architecture that matches the roadmap of workloads coming in the future.

Meta vs the world

Meta isn’t the only company developing in-house AI chips. Reports recently emerged that Microsoft has been working on its own AI processor, called Athena, in partnership with chipmaker AMD. Read: After Google, Microsoft Targets NVIDIA

Besides Microsoft and Meta, Google, Apple and Amazon have also been working on developing in-house AI chips. For instance, Google has built a supercomputer to train its models with its TPUs (Tensor Processing Units). Apple has been working on M1 and M2 chips for quite some time now. Amazon, on the other hand, is working on Trainium and Inferentia processor architectures.

The post After Apple, Amazon, Google and Microsoft, Meta Now Builds Its Own AI Chips appeared first on Analytics India Magazine.

Master Manipulator Altman Wants to be the AI Showrunner

This week, OpenAI CEO Sam Altman and two other AI experts testified before the US Congress to discuss the regulation of AI. The experts were testifying before a Senate subcommittee in a hearing called ‘Oversight of AI: Rules for Artificial Intelligence’.

While AI regulation has been contested globally, the testimony saw consensus across both sides of the political spectrum in the Senate on establishing a new regulatory body to oversee the industry. Unlike previous congressional hearings with tech executives such as Mark Zuckerberg, Altman received a positive response from lawmakers, who shared his concerns about the potential risks of AI.

The testimony marked Altman’s christening as the foremost figure in the field of AI, a position Altman himself stressed throughout, and the Senators seemed to have bought it.

Altman advocated for new AI laws and regulations to address the potential unintended consequences of powerful AI models. He proposed establishing a regulatory body that would grant licences to organisations for the distribution, and potentially the creation, of large models. That proposed stand-alone agency could, in theory, also revoke those same licences from companies deemed to have behaved badly.

However, Altman also advocated for preserving open-source initiatives and cautioned against stifling smaller startups, proposing that licensing become mandatory only once a model exceeds a specific size.

He offered to help, saying he could provide the Senate with an extensive list of crucial elements, including specific criteria a model must meet before deployment. It should be noted that, taken as is, this would choke off any chance of trial and error for the open-source community or for models from competitors – of which OpenAI has plenty.

Others in the community also believe that OpenAI recognises the potential of open-source projects surpassing its own if left unrestricted. On the other hand, OpenAI is reportedly planning to release an open-source AI model of its own.

The question that arises is: Why is OpenAI considering a return to the “open” approach? One could argue it is for the optics: the company doesn’t want to be judged as a behemoth, and would rather be perceived as a small firm leading this wave – one that must be consulted before any moves are made.

Chess, not checkers

Nevertheless, Sam Altman’s actions during the Senate hearing were perceived by some as manipulative and self-serving. Critics believe he prioritised OpenAI’s interests over those of the AI community, positioning the company as the central player in the Senate’s discussions. Altman came away praised and perceived as influential, while others, such as Gary Marcus, were overshadowed or ineffective in their questioning. Many argued that this is a textbook case of an established player trying to control a technology through legal means and gaining influence over its regulation. The smoothness and strategic nature of Altman’s approach have been compared to the tactics of a cunning Bond villain.

At times during the testimony, Altman even called his own innovation into question to ensure he landed on the right side of the debate, straddling the good and the bad just enough.

He said that OpenAI expects significant economic impacts from AI, including increased productivity as well as job creation, transformation, and displacement; that the company wants to collaborate with the US government; and that it is investing in research to mitigate future economic disruption caused by AI, without directly shaping policy.

Two birds, one stone

At one time, it seemed like the world was against OpenAI, with linguists like Noam Chomsky writing it off completely, while others thought it was too problematic and could even bring armageddon, given how alarmingly fast the technology was advancing.

OpenAI co-founder Elon Musk rode the wave and became one of the signatories of the petition to decelerate giant AI experiments and pause the training of models more powerful than GPT-4. Gary Marcus and AI visionary Yoshua Bengio were among the other prominent signatories.

On Musk’s part, it looked like a simple competitive move to get ahead. That narrative seemed to hold up, since, interestingly, reports suggested that Musk had roped in DeepMind researcher Igor Babuschkin to work on a rival to ChatGPT.

On the other hand, countries were banning it left, right, and centre. Italy banned it over privacy violations, while France, Spain, and Germany are investigating the company’s compliance with the EU’s General Data Protection Regulation (GDPR).

So, the route Altman took during the testimony looks like a very calculated manoeuvre to position himself as the king’s advisor: not only to neutralise competition, but also to ensure that OpenAI lands on the right side of AI regulation when it kicks in in the United States – its biggest market.

The post Master Manipulator Altman Wants to be the AI Showrunner appeared first on Analytics India Magazine.

Now You Can Talk to ChatGPT on Your iPhone

In a surprising new development, OpenAI has released a ChatGPT app for iOS users. The on-the-go app syncs users’ conversations and supports voice input, alongside other improvements. Sadly, Android users will have to wait. “P.S. Android users, you’re next!” said OpenAI in its blog post, noting that ChatGPT will be coming to Google’s devices soon.

Just launched the ChatGPT iOS app! https://t.co/QC2Ec7Jshs
Now in the US, world soon. Android next.

— Mira Murati (@miramurati) May 18, 2023

OpenAI said that the app is free to use and syncs users’ history across devices. It also integrates Whisper, OpenAI’s open-source, multilingual speech recognition system, which enables voice input. ChatGPT Plus subscribers will get exclusive access to GPT-4’s capabilities, along with early access to new features and faster response times, all on iOS.

Download ChatGPT app here.

The company said it is rolling the app out in the US first and will gradually expand to other countries in the coming weeks.

Google, the creator of Bard, is still catching up with ChatGPT. At its recent I/O conference, the company announced a series of Bard updates, including integration with Search and an upgrade to PaLM 2, but it has yet to launch a mobile app. Then again, who knows? Google may still beat OpenAI to Android with a Bard app.

Regulatory Challenges

This new development comes against the backdrop of regulatory challenges that many AI companies are facing across geographies, including the recent US Senate hearing, where the OpenAI chief shared his concerns about the technology and argued that government and companies should come together to regulate it. Altman’s move has raised eyebrows in the open-source community, which sees it as a manipulative attempt to cull new innovations and use them for personal benefit. Read: Has OpenAI Lost the Open Race?

On the other hand, the European Parliament has moved ahead with stringent AI regulations. These include bans on biometric surveillance, emotion recognition, and predictive policing AI systems, alongside tailor-made regimes for general-purpose AI and foundation models like GPT. India, meanwhile, has nothing in place yet. Read: India Backs Off on AI Regulation, But Why?

However, the Indian government is now waking up and considering a regulatory framework for AI-enabled platforms like ChatGPT, including areas such as algorithmic bias and copyright.

The post Now You Can Talk to ChatGPT on Your iPhone appeared first on Analytics India Magazine.