While AI has shown potential across many applications, the same stories about its capabilities are being repeated with growing frequency. One would expect that, amid the hype around generative AI, companies would be eager to adopt the technology in their operations. That is true, but much of the hype is being driven by investors, leaving CEOs little choice but to adopt AI as quickly as possible to keep the funding flowing.
Recently, Suumit Shah, CEO of Dukaan, announced that he had decided to lay off 90% of the company’s support team and replace them with AI. He described how integrating generative AI had made his company more productive. While telling the story of “AI is making workflow easier,” Shah did not mention that one of the main reasons for the layoffs might have been cost-cutting, and that marketing itself as a generative-AI-driven startup is a move to attract more ‘AI investors’.
Moreover, even though Shah claimed that Lina, the chatbot that replaced the entire support team, sped up customer interactions, many users on Hacker News criticised the chatbot as technically poor and therefore too weak a basis for Shah’s claim that the move was a good one. “It looks like a rushed and half-baked move by the company to stand in front of the generative AI adoption crowd,” said one of the users.
Announcing the adoption of AI while demeaning the worth of employees in the same breath earned Shah considerable backlash. Shah can be criticised for his choice of words while praising the potential of AI, but the investors pushing CEOs to market themselves as early adopters of generative AI deserve an equal share of the blame.
According to a recent IBM report, 75% of CEOs believe that adopting generative AI would give them a competitive edge, while 66% say they feel pressured by investors to accelerate AI adoption. Shah and Dukaan may well have felt the same heat.
Staying on the sidelines is also not an option
Vin Vashishta, founder and AI advisor at V Squared, recently explained how the success of NVIDIA has driven investors to chase the same strategy. “Investors don’t want AI, they want ROI,” said Vashishta. CEOs are bound to buckle under that pressure: ROI is the common denominator for all stakeholders in a business, CEOs and investors alike.
On the other hand, putting the resources and funds into AI products that the business can’t monetise is just as bad, “maybe even worse, than sitting on the sidelines,” said Vashishta.
Citing the IBM report, Vashishta points out that the one CEO in four who does not consider generative AI essential to a successful business today may also be falling behind. If Shah’s rush to become an investors’ favourite by moving first on AI looks corny, the CEO who insists AI is unnecessary looks equally so.
Too much generative AI
Interestingly, generative AI has been adopted with enthusiasm and anxiety in equal measure. CXOs at several leading companies told AIM that beyond the top-down push for generative AI adoption, much of it is also bottom-up, with employees pushing for generative AI education and adoption within their companies.
Amid this push, companies are also creating and appointing generative AI and AI heads. Much of it looks like hopping onto the generative AI bandwagon, yet it is driven by investors and employees alike. It makes us wonder how much of this is voluntary adoption, how much is forced, and how much of it is actually useful.
Ultimately, companies must carefully consider costs, expertise, data security, and in-house capabilities before adopting generative AI. Practical usefulness and understanding should be established first, avoiding impulsive decisions driven by excitement or anxiety.
As the AI hype settles, businesses will focus on efficient applications. Generative AI captivates investors and CEOs because of its promise to optimise workflows. With maturing technology and specialised models, it could transform industries worldwide, and investor-CEO collaboration will be key to harnessing its full potential in the modern workplace. Till then, CEOs will be pushed to say, “we are using generative AI”.
Funnily enough, without accelerating AI adoption, investors might consider replacing CEOs with AI models optimised for maximum ROI. Gradually, these AI CEOs could replace other executives as well, offering the additional advantage of not demanding exorbitant compensation, unlike human CEOs.
The post CEOs are Pushed to Say “Generative AI” appeared first on Analytics India Magazine.
During Web 2.0, Google and other browser makers turned a profit by selling user data to advertisers. To break into this monopolistic market, newcomers needed a unique proposition that set them apart.
While many browsers used privacy as a selling point, the Brave browser offered a standout feature: rewarding users for their anonymised data, powered by a blockchain-based token called the Basic Attention Token (BAT). Now that the blockchain hype has died down, Brave seems to be jumping on the AI bandwagon by selling an API for AI training data.
Collating and selling training data has become one of the fastest-growing markets of the generative AI wave. Recognising this, many top text-based platforms, like Twitter and Reddit, have locked down access to their APIs. Even companies with a focus on data security and privacy have thrown those ideals to the wayside in a bid to make money off the AI wave.
Reports have emerged that the privacy-focused browser Brave is building a business around a paid API for web data. Reading the fine print of the API’s terms has led many to question Brave’s strong privacy and security stance, while raising ethical issues about the copyright of the content.
Brave Search API explained
In a bid to capitalise on the AI wave, the Brave search API offers plans targeted specifically for use in AI models. Subscribers to the paid API get results from the web, access to Brave’s news cluster, as well as “rights to use data for AI inference”. Aiming to feed the ever-growing appetite of AI algorithms, it seems that Brave has resorted to selling the Internet.
There is one alternative to Brave’s search API: Bing’s competing offering. The main difference is that Bing does not mention using the API for AI training, which could be down to a combination of its vested interests in OpenAI and a desire to avoid possible copyright kerfuffles.
Brave, on the other hand, seems to have no qualms about distributing others’ web content. According to research by Alex Ivanovs at StackDiary, the output from Brave’s web search for AI API extracts up to 260 words in a machine-readable format through its ‘Extra Snippets’ feature. While these are functionally similar to Google’s Featured Snippets, they routinely run above 150 words, stretching the limits of what fair use allows.
In addition to the Extra Snippets feature, Brave also offers rich and structured web result data through Schema and access to its FAQs and Discussions features. This combination of features would allow any paying customer of the API to extract valuable data in a certain domain and even use it to fine-tune trained models.
To build this database, Brave makes heavy use of its own crawler, which has indexed over 8 billion pages to date and crawls over 40 million more every day, further growing the search engine’s index. By selling this data for a monthly fee, however, there is a case to be made that Brave is violating licences like CC BY-NC-ND, which expressly prohibit using content for commercial purposes.
While there is a possibility that Brave is being safe about the type of data it indexes, it is difficult to prove this. Moreover, once copyrighted data has been used to train an AI model, there is no provenance to trace the data’s source. This, coupled with the recent trend of API selling, has the potential to set a bad example for the rest of the industry.
Selling what they don’t own
APIs famously began with commercial roots, spearheaded by Salesforce’s automation API, which is widely considered to be the first API in the world. However, this trend quickly shifted to websites providing services in an XML or JSON format, mostly for free. Facebook’s API launch arguably played a big part in its growth, and Flickr’s API is ever-present in websites from the 2000s.
However, with the value ascribed to data thanks to AI, companies are walking back down the route of closed and paid APIs. It seems that APIs are now going back to being a sure-shot way to monetisation, mainly thanks to the value of high-quality training data. Even in this market, Brave is treading dangerous ground.
Apart from the API service, Brave also offers a ‘bespoke, large-data solution’ for companies looking to build a product beyond the API’s capabilities. This also seems to suggest that Brave has a dataset, similar to LAION, that encompasses the entirety of the Internet. This approach has been shown to be risky, as evidenced by the recent spate of copyright lawsuits against AI companies.
Even industry leaders like OpenAI and Meta have come under fire recently for their wanton use of copyrighted material to train their algorithms. In a class-action lawsuit headed by author Sarah Silverman, OpenAI and Meta were accused of using copyrighted books from shadow libraries like Library Genesis and Z-Library as training data.
As AI continues to eat more of the Internet, companies are looking to make a quick buck by selling more of this data. However, without adequate attention to copyright law, these services find themselves increasingly in a legal grey area.
The post Brave Browser Is Selling The Internet appeared first on Analytics India Magazine.
Artificial intelligence (AI) has grown in popularity during the past few months, but its use in some fields, including education, remains controversial. Much of this concern is focused on students' potential to use a generative AI tool, such as ChatGPT, to do their work, including writing an essay or creating code.
Some professors allow the technology in their classrooms, others forbid it, and others permit it at their discretion, which might include scrutinizing all students' work with GPT detectors. A recently published, peer-reviewed paper in Patterns found that programs built to detect whether text was generated by AI or humans more often falsely labeled it as AI-generated when it was written by non-native English writers.
In the study, the researchers tested the performance of seven widely used GPT detectors, with 91 essays written for the Test of English as a Foreign Language (TOEFL) by Chinese speakers, and 88 essays written by U.S. eighth-graders, which were obtained from the Hewlett Foundation's Automated Student Assessment Prize (ASAP).
The GPT detectors accurately classified all U.S. student essays, but incorrectly labeled an average of 61% of the TOEFL essays as AI-generated. One of the detectors incorrectly flagged 97.8% of the TOEFL essays as generated by AI.
The research also found these GPT detectors are not as effective at catching plagiarism as their users may believe. Many of the detectors advertise 99% accuracy without evidence to back up the claims.
The researchers generated essays using ChatGPT and 70% were spotted as AI-generated by the GPT detectors. But simple prompts, such as asking ChatGPT to "elevate the provided text by employing literary language", improved the text enough to reduce that figure to 3%, which meant the GPT detectors then incorrectly determined the essays were written by humans 97% of the time.
"Our current recommendation is that we should be extremely careful about and maybe try to avoid using these detectors as much as possible," said senior author James Zou, from Stanford University.
The authors attributed the errors to GPT detectors favoring complex language and penalizing the simpler word choices common among non-native English writers. The TOEFL essays exhibited lower text perplexity, a measure of how "surprised" an AI model is by the text. If the next word in an essay is hard for the GPT detector to predict, it is more likely to assume a human wrote the text; if the next word is easy to predict, it will assume AI created it.
"If you use common English words, the detectors will give a low perplexity score, meaning my essay is likely to be flagged as AI-generated. If you use complex and fancier words, then it's more likely to be classified as human written by the algorithms," Zou explained.
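The perplexity signal Zou describes can be illustrated with a toy example. The sketch below is not the detectors' actual code: it uses a small hand-built next-word probability table (all probabilities are made up for illustration) to show that text built from common words scores lower perplexity, the pattern the detectors then misread as AI-generated.

```python
import math

# Toy next-word probabilities under a hypothetical language model.
# Common words get high probability; rarer, "fancier" words get low.
word_prob = {
    "the": 0.20, "is": 0.15, "good": 0.10, "very": 0.10,
    "exemplary": 0.01, "resplendent": 0.005, "prose": 0.02,
}
DEFAULT_PROB = 0.001  # fallback for words the toy model barely expects

def perplexity(words):
    """Perplexity = exp(average negative log-probability per word)."""
    nll = sum(-math.log(word_prob.get(w, DEFAULT_PROB)) for w in words)
    return math.exp(nll / len(words))

simple = ["the", "is", "very", "good"]           # common word choices
ornate = ["resplendent", "exemplary", "prose"]   # rarer vocabulary

print(perplexity(simple))  # low perplexity: more likely flagged as AI
print(perplexity(ornate))  # high perplexity: more likely judged human
```

The same mechanics apply to real detectors, except the probabilities come from a large language model rather than a lookup table.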
Detecting AI-generated content, in general, can be difficult, which is why detection methods in the form of third-party computer programs have become popular. The research suggests, however, that these tools can marginalize non-native English writers in evaluative and educational settings.
"It can have significant consequences if these detectors are used to review things like job applications, college entrance essays or high school assignments," Zou explained.
Paradoxically, the study points out, GPT detectors could push non-native English speakers to use generative AI tools more, both to evade detection and to polish their language, helping them avoid the potential harassment and restricted visibility that being flagged could bring.
Cycle Se Chand Tak (from cycle to the Moon): ISRO’s Chandrayaan-3 is expected to touch down near the Moon’s south pole on August 23, 2023, at 5:47 PM IST. This would be unlike anything seen before, as it would be the first landing at the lunar south pole. Previous Moon landings have mostly occurred in the equatorial region; the farthest landing from the equator was Surveyor 7, near 40 degrees south latitude.
ISRO scientists are keen to explore the lunar poles because of the possible presence of water, as ice and hydroxyl, in the deep craters of the lunar surface, which Chandrayaan-1 indicated in 2008.
The South Pole, in particular, is considered more promising for finding water ice due to its larger area in permanent shadow and colder temperatures. The presence of water ice at the South Pole makes it an intriguing location for studying the early solar system and Earth’s history. The South Pole-Aitken basin, a massive crater, further adds to the geological interest of the region, as it could contain material from the Moon’s deep crust and upper mantle.
This region is a highly sought-after area for both space agencies and private space companies because of its water-ice deposits, which could support the establishment of a future space station. The success of India’s mission in this crucial location could be a groundbreaking development and bring significant changes to the field of deep space exploration.
Challenges Galore
Despite the scientific potential, exploring the South Pole poses challenges due to difficult terrain, extremely low temperatures below -230 degrees Celsius, lack of sunlight in certain areas, and the presence of large craters.
To address these challenges, ISRO scientists have developed a new algorithm embedded in Chandrayaan-3’s software. Unlike Chandrayaan-2, which relied on interpreting speed from static images, the new technology in Chandrayaan-3 estimates the spacecraft’s speed in real-time as it descends towards the lunar surface. This innovative approach enhances landing safety and increases the mission’s chances of achieving a successful touchdown.
The legs of the Chandrayaan-3 lander have also been reinforced to enable it to land safely and stabilise even at a speed of 3 m/s or 10.8 km/h. This improvement is crucial to prevent a rough landing similar to what happened with Chandrayaan-2, where control was lost just 7.2 km from the lunar surface. The strengthened legs increase the chances of a successful landing and reduce the risk of other potential troubles.
To enhance manoeuverability, the Chandrayaan-3 lander carries a larger fuel tank compared to its predecessor. This extra fuel allows for last-minute adjustments to the landing site, enabling the spacecraft to change course if it detects unstable surface conditions. The additional fuel enhances the lander’s flexibility during the descent.
The Chandrayaan-3 lander is equipped with solar panels on all four sides, ensuring continuous power supply even if it lands in an unfavourable direction or experiences tumbling. This design ensures that at least one or two sides of the lander will always face the Sun, providing uninterrupted solar energy.
Enhanced navigation and guidance capabilities are incorporated into Chandrayaan-3. It features new instruments, such as the Laser Doppler Velocimeter, to monitor the lander’s speed and make necessary corrections. The software has been updated with improved hazard detection and avoidance algorithms, as well as upgraded navigation and guidance software. Multiple layers of redundancies are in place to ensure system reliability in case of any failures.
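The physics behind a laser Doppler velocimeter can be sketched simply. ISRO has not published Chandrayaan-3's actual instrument parameters or software, so the wavelength below is an assumed value and the code is only an illustration of the underlying relation: light reflected off a surface approached at speed v is frequency-shifted by roughly 2v divided by the wavelength, so measuring the shift recovers the speed.

```python
# Illustrative sketch only; the laser wavelength is an assumed value,
# not a published Chandrayaan-3 specification.
WAVELENGTH_M = 1.55e-6  # assumed near-infrared laser wavelength (metres)

def velocity_from_doppler(doppler_shift_hz):
    """Closing velocity (m/s) from the measured Doppler frequency shift,
    using f_doppler = 2 * v / wavelength for light reflected off a surface."""
    return doppler_shift_hz * WAVELENGTH_M / 2

# A 3 m/s descent (the rated touchdown speed of the lander's legs)
# would produce a Doppler shift of about 3.87 MHz at this wavelength:
shift = 2 * 3.0 / WAVELENGTH_M
print(velocity_from_doppler(shift))  # recovers approximately 3.0 m/s
```

Running this relation continuously during descent is what lets an LDV report speed in real time, rather than inferring it from successive static images as Chandrayaan-2 did.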
To ensure the lander’s resilience, extensive stress tests and experiments, including helicopter drops, have been conducted. ISRO has created various test beds to simulate lunar landing conditions, allowing comprehensive assessments of the lander’s performance and durability.
With these advancements and rigorous testing, ISRO is confident in the launch of Chandrayaan-3, having considered various probabilities and taken necessary precautions to increase the mission’s chances of success.
AI-Powered Moon Mission
Chandrayaan-3 carries a Pragyan rover similar to Chandrayaan-2’s, but unlike its predecessor, it does not have an orbiter. The Pragyan rover is equipped with advanced AI technology, enabling it to communicate with the Vikram lander and assisting it in various tasks and operations.
Once the Pragyan rover rolls off the lander onto the lunar surface, it will embark on a comprehensive exploration journey, using its mobility systems to navigate the challenging terrain and reach its designated sites. The rover’s AI algorithms play a crucial role in identifying traces of water and various minerals on the lunar surface, allowing it to collect valuable data and send images back to Earth for research and testing purposes.
Deep learning techniques have been used to enhance Chandrayaan-3’s autonomous capabilities. AI applications enable automatic landing, intelligent decision-making, and completely automated systems. By leveraging AI, the mission becomes more self-sufficient, independent, and capable of going beyond human limitations in identifying discoveries and transmitting data back to Earth.
Pragyan is specifically designed for a dedicated research period of 14 days on the lunar surface. During this time, the rover conducts sophisticated scientific measurements and geological studies, enabling deeper exploration and analysis of the lunar terrain.
The location where Pragyan is supposed to land is chosen strategically to maximise the quality of data gathered. While it may not receive much sunlight, the rover’s primary objective is to gather extremely high-quality data, making the positioning essential for successful research.
In the broader context of space exploration, digital approaches continue to play a significant role. Data analytics is crucial for predicting, anticipating, and minimising weather hazards. AI is extensively used in developing autonomous space vehicles capable of self-diagnosis and self-repair. Virtual reality is utilised for simulated training in space missions, and digital twinning provides answers to hypothetical scenarios, which is particularly useful in relatively unexplored and unpredictable terrains.
There’s More
As for the scientific instruments on the Pragyan rover, it is equipped with cameras for imaging purposes. Additionally, it carries an alpha-proton X-ray spectrometer (APXS) and a laser-induced breakdown spectroscope (LIBS) for detailed analysis.
The APXS instrument aims to determine the elemental composition of the lunar surface near the rover’s landing site. It achieves this by utilising X-ray fluorescence spectroscopy, where X-rays or alpha particles are used to excite the surface. The APXS employs radioactive Curium (244) metal to emit high-energy alpha particles and X-rays, enabling both X-ray emission spectroscopy and X-ray fluorescence spectroscopy. Through these sophisticated techniques, the APXS can detect major rock-forming elements such as Sodium, Magnesium, Silica, Aluminium, Calcium, Iron, Titanium, as well as trace elements like Strontium, Yttrium, and Zirconium.
On the other hand, the LIBS instrument’s primary objective is to identify and measure the abundance of elements near the landing site on the lunar surface. It achieves this by firing high-powered laser pulses at various locations and analysing the radiation emitted by the decaying plasma.
Creating History, One Rocket at a Time
The launch on July 14, 2023 at 2:35 PM local time garnered significant attention, with over 1.8 million viewers tuning in to watch the event live on ISRO’s YouTube channel.
In addition to the online viewership, thousands of people witnessed the launch firsthand from the viewers’ gallery at the Satish Dhawan Space Centre in Sriharikota. Commentators described the sight of the rocket “soaring in the sky” as a truly “majestic” spectacle. The moment of liftoff was met with enthusiastic cheers and loud applause from the gathered crowds, including the scientists involved in the mission.
Inside the launch site, the hall reverberated with roars of “Bharat Mata ki jai” (Victory to mother India) coming from every corner, emphasizing the immense national pride associated with the mission.
The Chandrayaan-3 spacecraft was launched aboard the Launch Vehicle Mark-3 (LVM3) rocket, which carried an uncrewed six-wheeled lander and rover module configured with payloads to gather data on the Moon’s surface, representing the aspirations of 1.4 billion Indians.
After approximately 16 minutes, Chandrayaan-3 successfully separated from the LVM3 and entered Earth’s orbit, marking the beginning of its fuel-efficient journey towards the moon. If the rest of the mission unfolds as intended, India will soon join the United States, the former Soviet Union, and China as the fourth country to achieve a moon landing.
ISRO’s Chairman, Sreedhara Panicker Somanath, made his first comments following the successful lift off, announcing, “Chandrayaan-3 has begun its journey towards the Moon. Our launch vehicle has put the Chandrayaan on the precise orbit around the Earth.” ISRO further tweeted that “the health of the spacecraft is normal,” indicating a positive start to the mission.
The mission aims to achieve three objectives: a safe and soft landing on the moon’s surface, demonstrating rover abilities, and conducting in-situ scientific experiments.
The launch marks India’s second attempt at a soft landing on the Moon’s surface, coming nearly four years after the Chandrayaan-2 mission’s lander-rover pair crashed into the lunar terrain in 2019, which investigations attributed to malfunctions in its software and hardware components.
The soft landing, which ISRO chief Somanath has described as “15 minutes of terror”, is a set of critical tasks the lander must perform to reach the lunar surface: firing engines at precise times and altitudes, using the right amount of fuel, accurately scanning surface features like hills and craters, and ultimately achieving a successful touchdown.
As a result, the software and hardware of Chandrayaan-3 have been equipped with additional capabilities to address these identified problems.
The mission’s budget has been compared with that of a popular Indian movie which cost about the same but bombed at the box office, with commentators calling the mission a better use of resources. K Sivan, the then ISRO chairman, stated in 2020 that this ambitious, domestically developed mission comes at a relatively modest cost of around Rs 615 crore: Rs 250 crore for the lander, rover, and propulsion module, and Rs 365 crore for launching the mission.
The post Chandrayaan-3: India Pedaling from Cycle Trails to Lunar Trials! appeared first on Analytics India Magazine.
When Uncle Sam said that the rise of generative AI chatbots like ChatGPT was going to replace customer service jobs, he wasn’t wrong.
Turns out that US-based AI startup Air AI has introduced what it calls the world’s first AI-based customer support agent, capable of full 10-40 minute phone calls that sound like a real human, easily replacing customer service jobs. Interestingly, it also bought the domain name ‘air.ai’, making customer support synonymous with AI.
It is like the ChatGPT moment of BPO.
With unlimited storage capacity and the ability to remember, it is capable of performing tasks across more than 5,000 applications autonomously, without any training, supervision, or motivation. It is similar to having instant access to 100,000 sales and customer service representatives. More than 50,000 companies have already signed up on the waiting list.
AI customer service agents have arrived. Air AI is a conversational AI that can perform full 5-40 minute long sales and customer service calls. And it sounds completely replicable to humans. Details: -Can autonomously perform actions across 5,000 unique applications… pic.twitter.com/zIT2Ps4rAn
— Rowan Cheung (@rowancheung) July 17, 2023
Human Customer Support Dead?
The news has garnered widespread attention and mixed reviews. Conversations around the potential impact of AI and automation on jobs, and the need for governments to consider implementing a Universal Basic Income (UBI), are now louder than ever.
Last week, several industry experts from the BPO sector told AIM that generative AI will not result in job losses but will instead improve productivity and empower employees. They highlighted how generative AI can streamline repetitive tasks, enhance decision-making, and improve customer service, freeing employees to focus on more complex, value-added work. Companies embracing these technologies are expected to outperform those that do not.
On the contrary, Indian startup Dukaan laid off 90% of its customer support staff and introduced an AI chatbot called Lina to replace them, a move its founder, Suumit Shah, said was aimed at letting the company benefit from artificial intelligence. By leveraging AI, Dukaan claimed to have cut its customer support expenses by 85%, while the response time for initial queries dropped from 1 minute and 44 seconds to an immediate response.
Meanwhile, as per several Reddit users, while AI has shown potential in addressing common issues, striking the right balance between AI automation and human assistance is crucial.
Despite the immense potential of AI, human assistants are still necessary for specialised situations.
Generative AI can enhance productivity and empower employees, but human empathy and specialized assistance remain essential. Transparent disclosure of AI usage and ensuring a seamless transition between AI and human interaction are necessary for optimizing the customer experience. Moreover, the ethical implications and potential consequences of widespread AI implementation should be carefully considered.
What’s Under the Hood
Founded in August 2022 by Caleb Maddix, Ethan Wainer and Thomas Lancer, Air AI is a software platform that helps businesses get to market faster, cheaper, and more effectively than before. Not much information about the company’s background, including its investors, is publicly available; we have reached out to the company for more details.
Even though the company has not revealed how it created automated customer support, there are several ways to do it. Earlier this week, Indian-origin researcher Prady Modukuru created a remarkable tool that effortlessly translates any video into the user’s preferred language. By leveraging GPT-4 and prompt engineering techniques, Modukuru translated a recent conversation snippet between Meta CEO Zuckerberg and Lex Fridman into Hindi, ensuring accurate voice representation using ElevenLabs’ voice training and text-to-speech capabilities. He achieved precise lip-syncing through wav2lip-2, an AI-based solution generating lifelike lip movements.
There are other models as well that can be used to make it possible. For example, Synthesia excels at creating videos with AI avatars, supporting multiple languages and templates. Gen-2 combines text, images, and video clips for diverse content. Murf converts text to customisable voice-overs. Wav2Lip syncs speech with facial expressions. Retrieval-based Voice Conversion changes voices using neural networks. So-vits-svc trains AI models for desired voices and languages. Pictory and DeepBrainAI convert scripts into videos effortlessly.
The post RIP, Human Customer Service appeared first on Analytics India Magazine.
Viral TikTok trends can occasionally teach you a thing or two about technology, and the AI headshot-generation trend is a prime example. In this trend, users take selfies and convert them into high-quality headshots with the touch of a button. Here's how you can do it, too.
Getting the perfect business headshot can be a challenging feat since you need to hire a photographer and then cross your fingers that you like any of the photos enough to make it your LinkedIn profile picture or official professional headshot.
Therefore, when I saw the TikToks of people using AI to easily generate a professional-looking headshot, I had to put it to the test, and below is how I did it and what I found.
Set up your account
To get started, you have to download the Remini app, which is available for both iOS and Android. Then you can follow the app instructions to set up your account.
First, it will ask you to click on a "get started" button, which will take you to a subscription page. Here you can select to start a free, three-day trial. Beware that if you don't cancel that subscription plan after the trial is over, it will charge you a whopping $10 a week.
To finish setting up your account, give Remini access to your photos because it needs between eight and 12 of them to create your AI-generated headshot.
Create your headshot
Once you are all set up, click on the bottom bar where it says "AI Photos". Here you will click on "Generate my photos" and be prompted to select 8-12 photos of yourself that showcase your face clearly and have good lighting.
After that, you can scroll through all the many models and click on the one that you'd like your face to be rendered on. Once you do that, you will have to wait a couple of minutes while your image generates. You can turn on notifications to receive an alert when your images are ready.
When the images generate, you will have six different options to scroll through, which you can delete or save as you scroll. In my experience, some models were fantastic while others left much to be desired.
Final verdict
It takes some playing around, but I was personally shocked by how much my favorite image looked like me. The image even had some unique features on my face down pat, like my slightly crooked smile. You can see that result in the image at the top of the article.
I have covered how to use AI to make a headshot before, but — unlike with the Canva Pro technique — you have to put in zero work with this trend.
Yes, some images are absolutely random or bizarre. My personal favorite was when, out of nowhere, the AI gave me a short green bob that I didn't ask for.
But if you even get one good headshot that you really like from randomly submitting 8-12 photos, that sounds good enough to me.
In terms of cost, the free trial seems absolutely worth it. A $10-a-week subscription is much too steep for my liking, but perhaps it would be worth it if you subscribed for just one week, got the headshots you needed, and then unsubscribed.
AI is eating itself. The internet has become an AI dumping ground, and the models being trained on the web are now feeding on their own kind: data cannibalism.
In an article for The New Yorker, acclaimed science fiction author Ted Chiang drew attention to the perils of AI copies breeding copies, a digital photocopying of sorts. He likens this burgeoning dilemma to the JPEG effect, where each subsequent copy degrades in quality, revealing a mosaic of unsightly artefacts. As the boundaries of AI replication blur, the question becomes: what happens as AI-generated content proliferates across the internet, and AI models begin to train on it instead of on primarily human-generated content?
Recent findings by researchers from Britain and Canada show that generative AI models exhibit a phenomenon known as “model collapse.” This degenerative process occurs when models learn from data generated by other models, leading to a gradual loss of accurate representation of the true data distribution. Remarkably, the researchers deem it unavoidable, even in scenarios where the conditions for long-term learning are nearly ideal.
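The mechanism can be sketched with a toy experiment (our illustration, not the paper's actual setup): repeatedly fit a simple Gaussian model to samples drawn from the previous generation's model rather than from real data, and watch the estimate drift. All names and parameters here are illustrative.

```python
import random
import statistics

def recursive_fit(generations, sample_size=500, seed=0):
    """Toy model collapse: each 'generation' fits a Gaussian to samples
    drawn from the previous generation's fitted model, not the true data."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the true distribution N(0, 1)
    sigmas = [sigma]
    for _ in range(generations):
        # Train on model output instead of fresh human/real data
        samples = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(samples)
        sigma = statistics.stdev(samples)
        sigmas.append(sigma)
    return sigmas

sigmas = recursive_fit(generations=50)
print(f"initial spread: {sigmas[0]:.3f}, after 50 generations: {sigmas[-1]:.3f}")
```

Because each generation's estimation error compounds instead of being corrected by real data, the fitted distribution performs a random walk away from the truth, and rare tail events are progressively lost, the statistical analogue of the degradation described above.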
Repercussions
According to Ross Anderson, a professor of security engineering at Cambridge University and co-author of the ‘model collapse’ research paper, the internet is at risk of being flooded with insignificant content, similar to how the oceans are littered with plastic waste. This flood of content could hinder the training of new AI models through web scraping, benefiting firms that have already amassed data or control large-scale human interfaces. He also pointed to the recent Internet Archive fiasco, in which AI startups aggressively mined the website for valuable training data, as evidence of this concern.
According to a recent report by media research organisation NewsGuard, an alarming trend has emerged in which websites are being filled with AI-generated junk content to attract advertisers. The report reveals that over 140 prominent brands are unknowingly paying for advertisements displayed on websites powered by AI-written content.
This growing mass of spammy AI-generated material poses a threat to the very AI companies responsible for these models. As training datasets become increasingly saturated with AI-produced content, concerns are being raised about the diminishing utility of language models.
“There are many other aspects that will lead to more serious implications, such as discrimination based on gender, ethnicity or other sensitive attributes,” said Ilia Shumailov, a research fellow at Oxford University’s Applied and Theoretical Machine Learning Group, especially if generative AI learns over time to produce, say, one race in its responses, while “forgetting” others exist.
Inclusivity All The Way
The current models are already in the bad books of AI ethicists for their lack of inclusivity. In 2021, a group of researchers warned about the white male problem in language models. Lead author Anders Søgaard, a professor at UCPH’s Department of Computer Science, explains that these models exhibit systematic bias. Surprisingly, they align best with language used by white men under 40 with lower levels of education, while showing the weakest alignment with language from young, non-white men. This discovery emphasises the pressing need to address and rectify the biases within language models to ensure fairness and inclusivity for all.
Along the same lines, Shumailov said, “To stop model collapse, we need to make sure that minority groups from the original data get represented fairly in the subsequent datasets.”
Some companies are working towards a more inclusive AI. Meta, for instance, recently released ‘Casual Conversations v2’, an open-sourced, consent-driven dataset of recorded monologues. This enhanced version has been built to serve a broad spectrum of use cases, offering researchers a robust resource to evaluate the performance of their models with greater depth.
On the other hand, Google has been in the news for less flattering reasons ever since renowned AI ethicist Timnit Gebru was fired, followed by the exit of the rest of her team, who called Google a ‘white tech organisation’.
Flawed, not useless
While language models come with a long, long list of ethical defects, they also offer an array of advantages. For instance, Shumailov and his team originally called the ‘model collapse’ effect ‘model dementia’, but decided to rename it after objections from a colleague. “We couldn’t think of a replacement until we asked Bard, which suggested five titles, of which we went for The Curse of Recursion,” he wrote.
Currently, language models are becoming a part of every second company’s strategy. Firms in every sector are learning to unlock the full potential of advanced chatbots based on language models like GPT-4. While it is too early to judge, companies are sifting their way through the generative AI clutter, trying to figure out the best use cases for their business.
The post The Dark Consequence of AI’s Data Cannibalism appeared first on Analytics India Magazine.
Rakuten India, in close association with Analytics India Magazine, recently hosted the third edition of the Rakuten Product Conference (RPC) 2023. The event, centred on ‘Generative AI and the Future of Cloud’, attracted a whopping 6,300 participants from different parts of the world, echoing the global significance of this rapidly evolving technology.
The conference spanned two days, delving into the vast potential of Generative AI and Cloud Computing.
Embracing Generative AI
Day one commenced on an exhilarating note with a session on Generative AI by Sunil Gopinath, CEO of Rakuten India. Following him, Yasufumi Hirai, Ting Cai, Taku Okoshi, Nakane Tsutomu, the Consul General of Japan in Bengaluru, and other esteemed figures within the Rakuten Group and from the industry shared the platform to deliver their insightful keynotes.
Emphasising the need to put people at the centre of technology, Yasufumi Hirai, Group Executive Vice President and CISO of Rakuten Group, said, “Many tech engineers start their sentences with technology as the subject. But the subject of any sentence should be people; society; the world; organisations; family – technology can be the enabler to realise something amazing.”
Diving into the world of Generative AI, several key personalities addressed the audience, leaving them with much food for thought. Ankush Sabharwal, CEO at CoRover, presented the fascinating topic of BharatGPT, a large language model that supports multiple languages and exemplifies the transformative power of AI. Nischal Nadhamuni, Founder & CTO at Klarity, furthered the conversation by discussing the utilisation of Generative AI to automate complex finance and accounting workflows.
“We want to leverage AI to synthesise and generate information efficiently. But creativity, emotion, the empathy, is what human beings are uniquely good at. We hope to leverage AI to augment human creativity and unleash infinite possibilities,” said Ting Cai, Senior Managing Executive Officer and CDO, Rakuten Group.
Dr. Nanda Kambhatla, Sr. Director at Adobe Research India, explored the implications of Generative AI for creators and communicators, outlining its potential to reshape the landscape of content generation and communication. Vishwesh Pai, Sr. Director of Product, AI Platform at ServiceNow, on the other hand, addressed the challenges and benefits of creating generative AI-based experiences specifically for enterprise customers, a pressing topic given the increasing adoption of AI in business processes.
“As a machine learning (ML) researcher, the game has changed. It used to be that ML meant that for every task you want to do, you collect some data, train a model on it, fine-tune it and deploy it for inference. But now, we have foundation models, which are mammoth models. This is becoming the new way of doing AI,” said Kambhatla, highlighting the potential of generative AI for all.
The day’s events were neatly wrapped up with a comprehensive panel discussion on ‘Chat GPT/Large Language Models.’ The session was a robust exploration of the present landscape, potential, and associated challenges of these models in the organisational context.
The Future is in the Cloud
Day two focused on the future of cloud technology, beginning with insightful opening remarks from Tsubasa Shiraishi, Vice Chairman of Rakuten India. These were followed by compelling keynotes from Rakuten Group’s Akihito Kurozumi and Rohit Dewan, and Fergal Downey from Rakuten Marketing Europe Limited.
Industry stalwarts including Rajat Pandit from Google, Partha Seetala from Rakuten Symphony, Dr. Rahul Ghodke from CGI, and Nitin Mishra from ONDC, took to the stage to share their knowledge and insights on the future of cloud technology. Lory Kehoe, the founder of Blockchain Ireland, offered an unconventional perspective on entrepreneurial ecosystems, elucidating the birth and evolution of such systems.
In a talk that was particularly relevant to India’s rapidly expanding e-commerce sector, Sandeep Kohli, Vice President of Engineering at Flipkart, outlined how next-generation multi-cloud is driving a revolution in e-commerce, fundamentally enhancing customer experiences.
Kirthi Ganapathy, Specialist Customer Engineering Leader of Google Cloud India, touched on the important topic of ethics in AI, explaining that “in a time when the gap between left and right, liberal and secular is seemingly widening, where gender equity and biases are being stretched, responsible AI needs to ensure that it walks within the line in order to maintain neutrality in all aspects.”
The panel discussion that followed was both informative and invigorating. The attendees were provided with a comprehensive understanding of how cloud technology is set to disrupt businesses in the future. Nalini George, Chief People Officer of Rakuten India, closed the session and the conference with a vote of thanks. Her closing remarks underscored the conviction that cloud technology will continue to shape our future, resonating strongly with the audience.
Impact of the Conference
RPC 2023 was a melting pot of disruptive ideas. It offered a platform that spurred discussions on the limitless potential and imminent challenges in the field of Generative AI and Cloud Computing. Rakuten India, with its dedicated centres of excellence, showcased its leadership and commitment towards fostering a technology-forward environment, pushing the boundaries of mobile application development, data analytics, engineering, DevOps, and information security.
The event was also a conduit for networking, enabling attendees to connect with a broad spectrum of professionals – data artists, innovators, and researchers driving AI strategies in their organisations. This exchange opened doors for potential collaborations and long-lasting relationships within the industry, thus reinforcing the community’s collective progress towards a future shaped by Generative AI and Cloud Computing.
The post Rakuten India Successfully Hosts the 3rd Edition of RPC 2023 – Unravelling the Prospects of Generative AI & Future of Cloud appeared first on Analytics India Magazine.
Google Chat APIs are now publicly available to all Workspace developers, allowing them to create Chat spaces and add members programmatically on behalf of users through the new Developer Preview program. The main highlight of this Developer Preview functionality is the introduction of “import mode” spaces. These spaces enable Chat apps to preserve the original timestamps of spaces and messages, maintaining the context and order of the imported data as users would expect. Import mode spaces also prevent notifications and restrict end users from accessing these spaces while the legacy data is being imported.
The Google Chat API allows developers to create user-facing applications to integrate workflows into Chat and provide relevant information within conversations. Chat apps enable users to receive comprehensive details and link previews directly from internal and third-party systems. This asynchronous approach allows users to catch up and address issues promptly.
Several developers have already started using these new APIs to foster collaboration among their customers. Paris-based LumApps enables its users to initiate direct messages in Google Chat directly from their user directory. This functionality allows users to quickly connect with others based on job titles, roles, departments, or other attributes.
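As a rough sketch of how such an integration might look, the snippet below builds the request body for creating a space in import mode, based on the behaviour described above. The field names (`importMode`, `createTime`, `spaceType`) reflect the public Chat API Space resource as we understand it, but treat them as assumptions and verify against the Developer Preview documentation; a real migration would send this body through an authenticated Chat API client.

```python
import json

def build_import_space_request(display_name, original_create_time):
    """Build the request body for creating a Chat space in 'import mode'.
    The historical createTime (RFC 3339 format) preserves the original
    timestamp of the space being migrated from a legacy system."""
    return {
        "spaceType": "SPACE",
        "displayName": display_name,
        "importMode": True,  # suppress notifications, block end-user access
        "createTime": original_create_time,
    }

body = build_import_space_request("Migrated: #general", "2019-03-01T12:00:00Z")
print(json.dumps(body, indent=2))
```

Once the legacy messages (each carrying its own historical timestamp) have been written into the space, the app would complete the import so that members can be added and the space becomes visible to end users.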
Google’s Unwavering Focus on Workspace
During this year’s Google I/O conference, Google introduced several new features powered by generative AI for its Workspace suite, which includes Google Docs, Slides, and Sheets. One notable update was for Gmail, where they introduced a new generative AI extension called ‘Help Me Write.’ This extension builds upon the functionality of Smart Reply and Smart Response. These advancements are part of the Duet AI for Workspace product and will initially be available to trusted users and testers before a wider release later this year.
The main goal of Duet AI is to enhance productivity by leveraging AI capabilities to foster seamless collaboration and boost user creativity. The new features are designed to improve user productivity by offering more comprehensive and polished responses. These updates will be released alongside Workspace updates, thereby enhancing the productivity and convenience of Google’s email platform.
The post Google Chat APIs Are Now Publicly Available appeared first on Analytics India Magazine.
Before ChatGPT and generative AI, it seemed like all we heard about was the metaverse; before that, it was blockchain, NFTs, cryptocurrency, the cloud, the IoT, all the way back to the dot-com bubble and Y2K. Yet now, Stability AI founder and CEO Emad Mostaque said that he predicts artificial intelligence will be "the biggest bubble of all time" during a call with UBS analysts last week.
Mostaque claims that AI is still in its early stages and "not quite ready" for mass-scale adoption in most industries, including banking, the focus of his call with UBS analysts, "but we can see the value."
"I call it the 'dot AI' bubble, and it hasn't even started yet," Mostaque explained, referencing the dot-com bubble, also known as the Internet bubble, from the late 1990s and early 2000s. The dot-com bubble saw many internet startups appear, with high valuations and optimistic investors convinced that the internet would revolutionize different industries. This resulted in a rapid rise in stock prices for internet-related companies.
The dot-com bubble burst between 2000 and 2001 when the profits failed to materialize due to overvalued, unsustainable businesses, the stock prices plummeted, and many internet companies went bankrupt.
"This will be one of the biggest investment themes over the next few years," Mostaque added. According to the Stability AI founder and CEO, AI is a $1 trillion investment opportunity, and companies in the financial services industry, specifically banks, should embrace AI or risk being "punished" by the stock market.
With this statement, Mostaque specifically referenced Google Bard's disappointing initial launch, when Google lost $100 billion in market value just one day after its AI chatbot was shown putting out incorrect information.
During the call, Mostaque discussed his perspective on the potential of AI as an investment opportunity and its significance across various industries, like banking. AI is widely used in medicine, transportation, robotics, education, and defense.
Stability AI is the open-source company behind Stable Diffusion, a generative AI image creator that has gained popularity alongside competitors like Bing Image Creator and Midjourney, and AI chatbots like ChatGPT, Google Bard, and Bing Chat.