Meta’s Threads Crosses 10M Signups in 7 hours, ChatGPT had 1+M Users in 7 days

Meta launched its Twitter alternative ‘Threads’ today, and the app broke all previous records. The new offering garnered a humongous 10 million sign-ups in just its first 7 hours, while OpenAI’s ChatGPT had 1+ million users in its first 7 days. This must be a little ‘pat on the back’ moment for Zuck and company, who have been competing with OpenAI and others.

It’s super easy to sign up for Threads: you just need to download the new app, and you can access it through your existing Instagram account. While ChatGPT’s humongous popularity could be attributed to it being a one-of-its-kind large language model that people could access for free, Threads owes its success to Instagram’s existing user base.

And now that the buzz around the Musk vs Zuckerberg cage match has subsided, the rivalry has moved on to a new frontier. Hours after launching Threads, Meta CEO Mark Zuckerberg took to Twitter for the first time in over a decade to address the elephant in the room. Poking fun at the situation, Zuckerberg posted the iconic Spider-Man face-off meme in reference to the Threads announcement.

Musk did not hold back either. He flicked a quick jab (imaginary, of course, because the match was never going to happen) in Zuck’s direction, implying in a reply to a tweet that the app is nothing but a copy-paste.

Twitter has been reeling from some extreme measures taken by Elon Musk: first to monetise the platform, then to stop the platform’s data from being scraped to build LLMs by limiting the number of posts visible to unpaid accounts to 600. However, Zuck seems to have struck gold. Ex-Twitter CEO Jack Dorsey’s Twitter alternative Bluesky also apparently saw record-high traffic thanks to Musk’s unpopular decision, and Mastodon, built on the ActivityPub protocol, gained from it as well.

Instagram head Adam Mosseri said that while Meta’s Threads won’t support ActivityPub at launch, it might be built on the protocol in the future, allowing users to follow and interact with people on other fediverse platforms, such as Mastodon.

“Soon, you’ll be able to follow and interact with people on other fediverse platforms, such as Mastodon. They can also find people on Threads using full usernames, such as @mosseri@threads.net,” Mosseri said, noting that the team couldn’t complete the work in time to support the fediverse at launch.

However, there’s a catch: once you’ve signed up, you can only deactivate your Threads profile. To permanently delete your profile and its associated data, you will need to delete your Instagram account.

The post Meta’s Threads Crosses 10M Signups in 7 hours, ChatGPT had 1+M Users in 7 days appeared first on Analytics India Magazine.

How Generative AI Hackathon is Driving Innovation at SAP Labs India 

Globally, software companies are exploring use cases and ways to leverage large language models (LLMs) to improve their offerings. Germany-based SAP, the world’s leading enterprise resource planning (ERP) software vendor, is also leveraging the technology in multiple ways.
“We have been leveraging generative AI capabilities for quite a bit of time,” said Rahul Lodhe, senior director, SAP Artificial Intelligence, in an interview with AIM.

Prior to the launch of ChatGPT, SAP was already using generative AI in the auditing space for validating contracts and legal notices in their applications. “We had deployed innovative models internally, leveraging Google’s LLMs which were already there,” said Lodhe.

In this exclusive interaction, Lodhe tells AIM that a ‘Generative AI interest group’ was launched at SAP Labs India in Bengaluru in February 2023. The initiative, launched as part of the Leader Together Data Science forum, aims to encourage and motivate AI enthusiasts and curious minds across SAP Labs India to explore various generative AI/foundational models and the power of large language models.

“More than 2,500 colleagues joined the interest group across different roles and domains, from developers and product owners to UX/UA designers. Interestingly, the Generative AI interest group launched a generative AI hackathon called ‘Idea2Impact’ in March 2023 across SAP Labs India,” Lodhe said.

The objective of the hackathon was to find generative AI use cases in product innovation, developer efficacy and social impact, among other things. “We received an overwhelming response with 179 ideas submitted by over 550 participants. Of these, 40 ideas made it to the Proof of Concept (POC) stage and 11 were showcased at the Development to Community (DCOM) event to all employees. In fact, one of the ideas also resulted in a patent application.”

Integrating generative AI across core domains

“We’re applying generative AI across SAP’s suite of applications,” said Lodhe.

For example, the ERP leader is applying generative AI to SAP Transportation Management. “This capability combines the power of SAP’s document processing solution with generative AI, helping SAP customers in automotive and manufacturing industries to automate the delivery of goods (e.g., freight orders) and avoid mundane, paper-based work,” explained Lodhe.

Generative AI enables the solution to automate manual checks of goods receipts and delivery notes, cutting delivery-note processing time from 10 minutes to 3 minutes per truck and resulting in savings of 11%. “With up to 70% instant accuracy, generative AI accelerates time to value without requiring customers to invest in dedicated training data or document templates,” he added.

Not only that, SAP is revolutionising multiple domains by integrating generative AI capabilities. From SAP SuccessFactors to Sales and Services, Marketing and Communication, SAP Analytics Cloud, SAP Signavio Process Manager, and the SAP Digital Assistant, these advancements are transforming various aspects of the business landscape.

“In a first of many use cases, SAP and Microsoft will work together on new experiences that streamline recruiting and employee development processes in SAP SuccessFactors, including the generation of job descriptions and interview questions leveraging GPT models provided via the Microsoft Azure OpenAI Service.”

Nonetheless, SAP is constantly evaluating new partnership opportunities based on costs, enterprise readiness, economies of scale, addressable market, data privacy, and more. “This is being done so that we can build upon a rich ecosystem of partner technologies and select what fits best for SAP applications and provides most customer value,” said Lodhe.

SAP’s generative AI strategy

SAP’s generative AI strategy is in alignment with its overall SAP AI strategy, i.e., built for business and deeply embedded into its business applications and end-to-end processes. “We leverage an ecosystem of strategic partnerships around general-purpose AI tooling. We leverage SAP’s own AI technology to deliver Business AI that is built on business data and SAP’s deep industry and process expertise,” said Lodhe.

“We envision this to transform the user experience of SAP solutions massively – so that you can define the desired business outcomes, and SAP systems generate insights and optimizations by connecting the relevant business knowledge and process data,” he added.

Lodhe believes generative AI can make businesses around the world more productive, more efficient and more resilient, and that it has the potential to delight SAP’s end users in a totally new dimension so that they can achieve more and focus on what matters most. “Furthermore, we believe that AI is a useful tool but that humans continue to play a key role in the decision and reasoning processes of enterprises. Therefore, we intentionally design our AI solutions to keep humans in the loop to carefully review and approve AI-generated information,” he concluded.

The post How Generative AI Hackathon is Driving Innovation at SAP Labs India appeared first on Analytics India Magazine.

ChatGPT Sees Decline in Users


According to Similarweb, global desktop and mobile traffic to the ChatGPT website declined 9.7% in June compared to May, while unique visitors to the website dropped by 5.7%. Additionally, the data reveals an 8.5% decrease in the amount of time visitors spent on the website.

Similarweb’s Senior Insights Manager David Carr said that the declining traffic suggests a waning interest in the chatbot’s novelty. RBC Capital Markets analyst Rishi Jaluria, on the other hand, interprets the data as an indication of increased demand for generative AI with real-time information, according to Reuters.

Sarah Hindlian-Bowler, head of Technology Research Americas at Macquarie, attributes the decline in usage to the challenges of scaling up rapidly from zero to 100 million users. The infrastructure strain may have led to reduced accuracy, requiring adjustments to the model and compliance considerations.

ChatGPT gained widespread popularity and reached 100 million monthly active users in January, just two months after its launch. It quickly became the fastest-growing consumer application ever, boasting over 1.5 billion monthly visits and ranking among the top 20 websites worldwide.

Notably, ChatGPT surpassed Microsoft’s Bing, a search engine leveraging OpenAI’s technology. In recent months, several competitors, including Google’s Bard chatbot, have entered the market. Microsoft’s Bing also offers a chatbot powered by OpenAI to users at no cost.

In May, OpenAI launched the ChatGPT app for iOS, potentially diverting some traffic from its website. Additionally, the decrease in usage could be linked to the summer break for schools, as fewer students seek homework assistance. Within two weeks of releasing the Browse with Bing feature on the app, OpenAI decided to shut it down.

Data.ai reports that the ChatGPT app has been downloaded over 17 million times globally on iOS as of July 4. The U.S. market has shown consistent popularity, with an average of 530,000 weekly downloads during the first six full weeks of availability.

The recent growth slowdown may help mitigate the substantial costs associated with running ChatGPT, which demands significant computing power to respond to user queries. OpenAI CEO Sam Altman has described the expenses involved as “eye-watering.”

The post ChatGPT Sees Decline in Users appeared first on Analytics India Magazine.

Meta’s Threads won’t Launch in EU over Privacy Concerns

Meta’s Twitter rival app won’t launch in the EU yet over data privacy concerns, TechCrunch reported on Wednesday.

In the United States, the platform informs users that it will collect a wide range of personal information, such as health and financial data, browsing history, location, purchases, contacts, search history, and sensitive information.

However, the EU, under the Digital Markets Act (DMA), has prohibited Meta from introducing advertising services on WhatsApp that utilize data from Facebook or Instagram. The new platform is designed to collect information from Instagram, specifically regarding user behavior and interactions with advertisements.

Meta is eagerly awaiting additional guidance on the Digital Markets Act, a new set of European Union rules governing how big online platforms can use their market influence, Bloomberg reported, citing a person familiar with the matter. The European Union is currently discussing the regulations with the company and is expected to give guidance in September, the report added.

On Tuesday, seven companies, including Meta, said that they meet the criteria of ‘gatekeeper’, which means they will have to comply with the EU’s tougher rules under the Digital Markets Act. Companies violating the DMA can be fined up to 10% of annual global turnover.

Threads, released worldwide on Thursday, is touted as a rival to Twitter. It passed 2 million sign-ups in its first two hours, Mark Zuckerberg said in a Threads post.

The post Meta’s Threads won’t Launch in EU over Privacy Concerns appeared first on Analytics India Magazine.

Is ChatGPT Good at Coding?

Until two years ago, schools and colleges were toiling hard to teach students C/C++ from scratch by printing ‘Hello World’, but that is now a thing of the past. Following the launch of ChatGPT, English emerged as the new programming language. Lately, a meme has been making the rounds on the internet suggesting that code generated by ChatGPT takes longer for developers to debug.

On Twitter too, several users expressed disappointment at how difficult it has become to debug code created by ChatGPT. One user on Twitter said, “ChatGPT is good for code generation, but it generates code that requires debugging, so blindly using it would be a waste of time.”

After using ChatGPT for so long in development, I can say
-> Since chatgpt has sessions so sometimes asking your query in a new session might help
-> Chatgpt is good for code generation, but it also generates debugging required code so blindly using it will be time wasting only pic.twitter.com/ozb6rPYWwm

— Parth Verma (@v_parth7) July 5, 2023

However, is this reason enough to stop developers from using ChatGPT for coding? The answer is a big no, because coding and thinking simultaneously puts a brake on your chain of thought. Even though it takes longer, people will still use ChatGPT for coding because it allows them to be creative, solve problems, and discover new coding ideas. With ChatGPT, our critical thinking is not limited by the speed at which we can convert thoughts into code.

GPT-3.5 vs GPT-4

It is a fact that even the most expert human programmer cannot always get a program right on the first try. Large language models (LLMs) have proven to be highly skilled at generating code, but they still face difficulties with complex programming tasks. To overcome these challenges, researchers have explored a technique called self-repair, where the model identifies and corrects errors in its own code. This approach has gained popularity as it helps improve the performance of LLMs in programming scenarios.

A research paper titled ‘Demystifying GPT Self-Repair for Code Generation’ quantifies GPT-4’s self-repair capabilities against those of other LLMs. According to the paper, GPT-4 has an extremely useful emergent ability that is stronger than in any other model: self-debug.

One of the key findings of the paper was that GPT-3.5 can write much better code when given GPT-4’s feedback. GPT-4’s exceptional ability to self-repair stems from its remarkable feedback mechanism: unlike other models, GPT-4 possesses a unique capacity for effective self-reflection, allowing it to identify and rectify issues within code. This distinguishes it from its counterparts in the AI landscape.

Notably, the feedback model and the code generation model do not have to be the same. For example, you can debug code created by GPT-3.5 using GPT-4: GPT-3.5 acts as the code generation model and GPT-4 as the feedback model. This approach lets a stronger model continuously refine a weaker model’s output, making GPT-4 a standout feedback model for AI-driven programming.

In an interesting insight from the research, GPT-4’s self-generated feedback, combined with feedback from an experienced programmer, increased the number of repaired programs. This means human critical thinking still needs to be part of the debugging process: AI can assist you with debugging, but in the end, it all boils down to your skills.
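The generate–test–critique cycle described above can be sketched in a few lines. This is a minimal, illustrative sketch, not the paper’s actual implementation: `toy_code_model` and `toy_feedback_model` are hypothetical stand-ins for calls to a code model (say, GPT-3.5) and a feedback model (say, GPT-4).

```python
# A minimal sketch of the self-repair loop: a code model drafts a program,
# tests are run against it, and a (possibly different) feedback model
# critiques the failure so the code model can regenerate a fix.

def run_tests(program_src, tests):
    """Execute the candidate program and return (passed, error_message)."""
    scope = {}
    try:
        exec(program_src, scope)
        for inp, expected in tests:
            assert scope["solve"](inp) == expected
        return True, ""
    except Exception as exc:
        return False, repr(exc)

def self_repair(code_model, feedback_model, task, tests, max_rounds=3):
    """Generate, test, and repair a program for up to `max_rounds` rounds."""
    program = code_model(task, feedback=None)
    for _ in range(max_rounds):
        passed, error = run_tests(program, tests)
        if passed:
            return program
        # The feedback model inspects the failing program and its error,
        # and the code model regenerates conditioned on that critique.
        critique = feedback_model(task, program, error)
        program = code_model(task, feedback=critique)
    return program

# Toy stand-ins for the LLM calls: the "code model" first emits a buggy
# draft, then fixes it once any critique is supplied.
def toy_code_model(task, feedback=None):
    if feedback is None:
        return "def solve(x):\n    return x + 1\n"  # buggy first draft
    return "def solve(x):\n    return x * 2\n"      # repaired draft

def toy_feedback_model(task, program, error):
    return f"Tests failed with {error}; check the arithmetic."

repaired = self_repair(toy_code_model, toy_feedback_model,
                       "double a number", tests=[(3, 6), (0, 0)])
```

The point of the sketch is the decoupling: `code_model` and `feedback_model` are separate parameters, so a GPT-4-style critic can be paired with a GPT-3.5-style generator, exactly the mix-and-match the paper measures.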

What’s next?

The code created by ChatGPT will only be as good as the prompt. If your prompt is not up to the mark, you will not be able to produce the desired output. Prompting is mostly trial and error: if one prompt doesn’t work, you try another. Going ahead, there is a possibility that, as with coding, you might not even need to create prompts on your own. Developers are coming up with open-source models that can be integrated on top of the ChatGPT API to dish out the best possible prompts for you.

Introducing `gpt-prompt-engineer` ✍
An agent that creates optimal GPT prompts.
Just describe the task, and a chain of AI systems will:
– Generate many possible prompts
– Test them in a ranked tournament
– Return the best prompt
And it's open-source: https://t.co/nrivU2BWmn pic.twitter.com/rcnlJ5g5ZN

— Matt Shumer (@mattshumer_) July 4, 2023

An example of such an AI agent is ‘gpt-prompt-engineer’. It is a constrained agent, which means its behaviour is highly controlled, leading to better results than open-ended agents. It chains together many GPT-4 and GPT-3.5-Turbo calls that work together to find the best possible prompt. Often, it has even outperformed prompts written by humans.
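The “ranked tournament” step can be pictured as an Elo-style competition among candidate prompts. The sketch below is illustrative only and is not gpt-prompt-engineer’s actual code: `toy_judge` is a hypothetical stand-in for the GPT-4 call that compares the outputs two prompts produce on the described task.

```python
# An illustrative Elo-style tournament over candidate prompts: every pair
# of prompts plays a match, a judge picks the winner, ratings are updated,
# and the highest-rated prompt is returned first.
from itertools import combinations

def elo_tournament(prompts, judge, k=32, start=1200.0):
    """Run all pairwise matches and return prompts sorted by Elo rating."""
    ratings = {p: start for p in prompts}
    for a, b in combinations(prompts, 2):
        # Expected score of `a` from the current rating gap (standard Elo).
        expected_a = 1.0 / (1.0 + 10 ** ((ratings[b] - ratings[a]) / 400.0))
        score_a = judge(a, b)  # 1.0 if `a` wins the match, 0.0 if `b` wins
        ratings[a] += k * (score_a - expected_a)
        ratings[b] += k * ((1.0 - score_a) - (1.0 - expected_a))
    return sorted(prompts, key=ratings.get, reverse=True)

# Toy judge: prefers the longer, more specific prompt. A real judge would
# ask a strong model to compare the outputs each prompt yields.
def toy_judge(a, b):
    return 1.0 if len(a) > len(b) else 0.0

candidates = [
    "Summarise this.",
    "Summarise the text in three bullet points.",
    "Summarise the text in three bullet points, keeping key numbers.",
]
best = elo_tournament(candidates, toy_judge)[0]
```

Pairwise matches with a rating system are more robust than scoring each prompt in isolation, since the judge only ever has to answer the easier question “which of these two results is better?”.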

The post Is ChatGPT Good at Coding? appeared first on Analytics India Magazine.

India to Manufacture its own Microchips by the End of 2024 


India will commence construction on its inaugural semiconductor assembly plant and aims to start manufacturing its own microchips within the country by the end of 2024, the Financial Times reported on Wednesday.

Ashwini Vaishnaw, India’s minister of electronics and information technology, said that Micron Technology, the company establishing a chip assembly and testing facility in Gujarat, is scheduled to commence construction on the $2.75 billion project in August. The project, which has received government support, is set to move forward, the report added.

In the interview with the Financial Times, he said, “This is the fastest for any country to set up a new industry. Eighteen months is when we have targeted [the first] production to come out of this factory — that is, December of ‘24.”

On Monday, Microchip Technology inaugurated its newly established research and development (R&D) center in Hyderabad. The company also revealed its intention to expand the workforce at the facility, aiming to double its headcount to 1,000 in the coming years. This new office is a crucial component of Microchip Technology’s long-term initiative, which involves investing $300 million over several years.

Earlier, the Indian government decided to accept new applications to build semiconductor fabs and display fabs in the country starting June 1, 2023, through the Modified Semicon India Programme, with the aim of developing a semiconductor and display manufacturing ecosystem in India.

The post India to Manufacture its own Microchips by the End of 2024 appeared first on Analytics India Magazine.

Elon Musk vs Mark Zuckerberg: The Real Tech Battle Begins 

Meta chief Mark Zuckerberg recently took inspiration from Twitter boss Elon Musk’s playbook and unveiled paid subscriptions for verified accounts. But will he now take a leap towards safeguarding valuable data on this fresh platform?

Meta recently announced that it will be launching its Instagram Threads pretty soon. This comes just a few days after Musk announced a limit on the number of posts a user can read per day. The explanation Musk gave for putting a cap on viewing was to “address extreme levels of data scraping and system manipulation”. There is speculation that Musk might plan to build his own ChatGPT rival using Twitter data, which some are referring to as TruthGPT.

So, the question is: will Zuckerberg follow suit and limit the number of posts users can view on Threads to save the valuable data?

In recent times, tech giants have aspired to create their own generative AI chatbots similar to OpenAI’s ChatGPT. As they say, data is gold, and any company that aspires to create any form of chatbot needs data to train its LLM.

What we need is TruthGPT

— Elon Musk (@elonmusk) February 17, 2023

Currently, Google probably has the most data of any competitor out there. Google DeepMind chief Demis Hassabis recently claimed that the company’s next LLM project, called Gemini, is going to eclipse ChatGPT. Interestingly, DeepMind, with Google, has something that no one else has: YouTube. While Twitter is filled with textual data, YouTube is a gold mine for visual, audio, and textual data in almost every single language on Earth.

What About Meta?

One fascinating fact about Meta is that it does not use the data it collects across its applications to train its LLM, LLaMA. According to the research paper ‘LLaMA: Open and Efficient Foundation Language Models’, LLaMA is trained on CommonCrawl, GitHub, Wikipedia, and books.

However, there is a possibility that in the future, Meta might train its LLM models (LLaMA and beyond) on Facebook and Instagram data. Then again, there is also a possibility that it won’t have to, since Meta is betting big on self-supervised learning (SSL) and world models, which limit the use of RLHF (reinforcement learning from human feedback) and reduce the reliance on training models on user data.

Recently, Meta collaborated with Carnegie Mellon University and the University of Southern California to build LIMA. Unlike ChatGPT and Bard, where RLHF is considered crucial, this model may be using self-supervised learning, an approach Yann LeCun has strongly advocated for a long time.

Meta Bets Big on Open-Source AI

Instead of making LLaMA available to the general public as a chatbot, Meta chose a different approach: it released it as an open-source package that members of the AI community can request access to. Whether the future of AI is open or closed source is itself a different debate, but by releasing LLaMA, Meta took a stand for democratising access to large language models, helping researchers advance their work in this subfield of AI.

One possible reason Meta is backing open source is that it is nowhere near Google or OpenAI. Doing so gives it the opportunity to correct the flaws and loopholes in LLaMA and keep an eye on how the community is improving it. In a recent podcast with Lex Fridman, Zuckerberg said, “The stage we are in right now, the equities balance strongly in my view towards doing this more openly.”

Zuckerberg has shifted his focus to generative AI. He said that Meta will bring LLM-powered AI agents to Messenger and WhatsApp first, but will explore additional opportunities across its family of apps, consumer products and the metaverse.

Only time will tell if Meta will venture into its own chatbot. Speaking of generative AI in a Facebook post, Zuckerberg said, “We have a lot of foundational work to do before getting to the really futuristic experiences, but I’m excited about all of the new things we’ll build along the way”.

It would be very interesting to see how Meta leverages Threads’ data besides selling it to advertisers. Only time will tell if Meta will restrict data scraping. Also, Threads has been marketed by the company as an Instagram app, and there is a high chance of it going the Twitter way, challenging Musk in the real tech battle.

The post Elon Musk vs Mark Zuckerberg: The Real Tech Battle Begins appeared first on Analytics India Magazine.

OpenAI Forms Superalignment, a Dream Team to Tackle Superintelligence Alignment

In a recent blog post, the ChatGPT maker announced that it is forming a team of skilled machine learning researchers and engineers to address the challenge of aligning superintelligence, and has allocated 20% of its computing resources over the next four years to the effort. Co-headed by Ilya Sutskever, OpenAI’s co-founder and Chief Scientist, and Jan Leike, the Head of Alignment, the division’s primary focus is to solve the fundamental technical problems of superintelligence alignment within a four-year timeframe. The team comprises researchers and engineers from OpenAI’s previous alignment team as well as experts from other departments within the company.

OpenAI intends to share the outcomes of its work widely and considers contributing to the alignment and safety of non-OpenAI models as an important aspect of its mission. The new team’s work complements OpenAI’s ongoing endeavours to enhance the safety of current models like ChatGPT and to address other AI-related risks, including misuse, economic disruption, disinformation, bias and discrimination, addiction and overreliance, among others. While the focus of the new team is on the machine learning challenges associated with aligning superintelligent AI systems with human intent, they actively engage with interdisciplinary experts to ensure their technical solutions encompass broader human and societal concerns.

Superintelligence could help address global challenges, but it also carries risks of human disempowerment or extinction, and current methods of controlling it are insufficient.

OpenAI is Hiring

To make this mission come to fruition, OpenAI is actively hiring research engineers, research scientists, and research managers.

Research Engineer: OpenAI is hiring Research Engineers with an annual salary range of $245,000 to $450,000. The role involves writing performant code for machine learning (ML) training, conducting and analyzing ML experiments, collaborating with a small team, and planning future experiments. Responsibilities also include exploring scalable oversight techniques, studying generalization, managing datasets, investigating reward signal issues, predicting model behaviours, and designing approaches for alignment research. Candidates should be aligned with OpenAI’s mission, have strong engineering skills, curiosity about ML models, and enjoy a fast-paced research environment. ML algorithm implementation and data visualization skills are desirable, and ensuring human control over AI systems is a priority.

Research Scientist: In this role, you’ll develop innovative machine-learning techniques, collaborate with colleagues, and contribute to the company’s research vision. Responsibilities include designing experiments for alignment research, studying generalization, managing datasets, exploring model behaviours, and designing novel approaches. The ideal candidate is aligned with OpenAI’s mission, has a track record of ML innovation, can pursue research independently, and is motivated to address the challenge of aligning AI models. Experience with ML algorithms and data visualization is preferred.

Research Manager: As the research lead, you’ll oversee a team of research scientists and engineers focused on aligning superintelligence and studying generalization. The role involves planning and executing research projects, mentoring team members, and fostering a diverse and inclusive culture. Leadership experience in research, alignment expertise, and a passion for OpenAI’s mission are desired. The annual salary range is $420,000 – $500,000. Adaptability, a hunger for learning, and a commitment to improving culture and diversity are also important qualities.

This news comes against the backdrop of AI regulation becoming a hot topic around the world, with comparisons made to the threat of nuclear war. OpenAI CEO Sam Altman also testified before the US Senate on the subject.

Additionally, OpenAI launched a program to fund experiments aimed at democratising AI rules. It will grant $1 million to those who contribute the most to addressing safety issues.

Soon after its world tour, it looks like OpenAI harbours an awe-inspiring arsenal poised for revelation.

The post OpenAI Forms Superalignment, a Dream Team to Tackle Superintelligence Alignment appeared first on Analytics India Magazine.

KDnuggets News, July 5: A Rotten Data Science Project • 10 AI Chrome Extensions for Data Scientists Cheat Sheet

Data Science Project of Rotten Tomatoes Movie Rating Prediction: First Approach • 10 AI Chrome Extensions for Data Scientists Cheat Sheet • Generate Music From Text Using Google MusicLM • 5 Free Books on Natural Language Processing to Read in 2023 • Stable Diffusion: Basic Intuition Behind Generative AI