We are deep into 2023, and over 2,16,910 people have already received the dreaded pink slips this year, as per layoffs.fyi. The figure marks a staggering 315% increase over the previous year’s job cuts, signalling an impending crisis. Almost all major tech companies have frozen hiring, deferred joining dates, and even paused salary hikes for now.
According to media reports, Indian IT giant Infosys has delayed the salary hike for its non-senior management staff, which was supposed to come into force in July. Merely a month ago, Amazon postponed the joining of graduates by at least six months amid widespread job cuts. A fresh graduate from Kolkata’s Jadavpur University, who was slated to join as a Software Development Engineer (SDE-1) in June, told AIM that he received the news of the deferral at the end of May. However, none of the affected graduates received any compensation for the delay.
Not only did Amazon delay start dates, the company also shut down its Halo Health division in July. Combined with the previous layoffs from January this year, the total number of job cuts at Amazon comes to 27,000, approximately 8% of its corporate workforce and the biggest round in its history.
Wipro also cut the salary offered to fresh hires by nearly half in April, taking it from Rs 6.5 lakh per year to Rs 3.5 lakh per year. Since last month, Wipro has stopped recruiting new employees who ask for salary hikes of more than 30%. Instead, the company is focusing on automation solutions to handle the majority of tasks typically handled by HR.
This month, Microsoft announced additional job cuts, following the 10,000 layoffs it announced in January 2023.
Following Microsoft’s suit, Meta, currently basking in the success of Threads, also terminated 6,000 employees in May. Since November last year, about 21,000 people in total have lost their jobs at Meta.
Apple, which managed to navigate through the layoffs last year relatively smoothly, eventually succumbed to them by laying off a small number of employees in its corporate retail team in April.
Alphabet, GitHub, Twitter, Accenture, LinkedIn, Netflix, and Salesforce are among the hundreds of companies that have downsized their workforce this year.
But Why Is This Happening?
Anyone who thought layoffs would cease as the calendar flipped to 2023 was sorely mistaken.
The World Economic Forum had already predicted a global recession in 2023, further intensified by geopolitical tensions such as the Russia-Ukraine conflict.
Just a day ago, US Treasury Chief Janet Yellen warned about the ongoing risk of a recession amid persistently high inflation.
Consumer price inflation has surged worldwide due to the geopolitical crisis, resulting in weakened consumer demand. This continuous rise in prices has compelled central banks globally to tighten monetary policies, a departure from the norm.
The Federal Reserve has been attempting to curb inflation by raising interest rates to their highest level in decades, which has resulted in slower economic growth and fears of a recession. While the central bank decided not to raise interest rates further last month, it indicated that additional rate hikes in 2023 are possible. The Federal Reserve has stated that a recession in 2023 is likely but is expected to be mild and short-lived.
Plus, there is increasing investor pressure to adopt a more assertive cost-cutting strategy. Given the current circumstances, companies are carrying an excess of employees and high per-employee costs. And with the return to the office, they are trying to compensate for the overhiring they did during Covid-induced lockdowns.
India is at Risk
And India is not well cushioned against the impact.
Indian IT services giants such as TCS, Infosys, Wipro, HCLTech, and others are expected to have limited revenue growth in the first quarter of FY24 due to the challenging macroeconomic conditions in the US and Europe.
Since nearly 80% of the Indian IT services industry’s revenue comes from North American and European markets, the economic uncertainty in these regions has resulted in reduced technology spending by companies.
As per a Naukri JobSpeak Index, white-collar hiring in India experienced a 3% decrease in June as industries like IT, Retail, BPO, Education, FMCG, and Insurance showed cautious hiring patterns.
The report revealed a year-on-year decline of 3%, with 2,795 job postings in June 2023 compared to 2,878 in June 2022. Additionally, month-on-month job postings decreased by 2%.
The IT industry was hit particularly hard, witnessing a significant 31% decrease in new job opportunities compared to the previous year.
Amid an impending global recession, geopolitical tensions, and inflationary pressures, big tech companies as well as Indian IT firms have resorted to cost-cutting measures, including layoffs and deferred salary hikes, to mitigate the impact. Brace yourselves: this downturn is not going to be over anytime soon, and the worst is far from over.
Read more: Behind Indian IT’s Mixed Emotions for LGBTQ+
The post Worse is Far from Over. More Tech Jobs to be Slashed in Future appeared first on Analytics India Magazine.
By Connor Lee, Incoming NYU Computer Science Student
Image created by editor with Midjourney
With the leap in AI progress making shockwaves throughout mainstream media since November 2022, many speculate that their jobs will be taken over by their AI counterparts. One profession, however, cannot possibly be replaced: the researchers advancing deep neural networks and other machine learning models – the humans behind the AI. Although research is traditionally done within university walls, AI is by no means a traditional research field. A sizable portion of AI research is done in industrial labs. But which sector should aspiring researchers flock toward? Academia or industry?
“Academia is more inclined to basic fundamental research while the industry is inclined to user-oriented research driven by the large data access,” says Nitesh Chawla, a Professor of Computer Science and Engineering at the University of Notre Dame. Prof. Chawla points to the pursuit of knowledge as a separating factor between industrial and academic AI research. Within industry, research is tied to a product, advancing towards a better society, while within academia, the pursuit of pure discovery drives research breakthroughs. The seemingly endless academic freedom does not come without its drawbacks: “academia does not have the data nor the computing access available,” according to Prof. Chawla.
For aspiring young researchers, the choice seems simple: the private sector has everything they could want. Vast, autonomous, commercial organizations striving toward innovation while supported by readily available data, computing power, and funding. This led to a perception that the industry is “stealing” talent away from academia. Academics, naturally, complain. A study published in 2021 by a team from Aalborg University pointed out that “increasing participation of the private sector in AI research has been accompanied by a growing flow of researchers from academia into industry, and especially into technology companies such as Google, Microsoft, and Facebook”.
As expected, industrial researchers disagree. “When I hire for my team, I want top talent, and as such I’m not poaching academic talent, but rather I am trying to help them get industry awards, funding from industry, and have their students as interns,” explains Dr. Luna Dong, a Principal Scientist at Meta who is the head scientist working on Meta’s smart glasses. She sees a glaring difference between industry and academia, which could be credited to the fundamental way research is conducted. According to Dr. Dong, AI research within an industry is conducted by knowing what the end product should look like and reverse engineering a path toward it. In contrast, academics, having a promising idea, continuously construct various paths, not knowing where those paths would lead.
Yet, despite these contrasts, Dr. Dong believes the industry helps academia and vice versa, “lots of industry breakthroughs are inspired by applying the research from academia on real use-cases.” Likewise, Computer Science Professor Ankur Teredesai from the University of Washington, Tacoma, describes the relationship between industry and academia as supporting each other, “symbiotic is the word that comes to mind.” As he views it, research practices have evolved into academics shifting their agenda to aid industry products — a good example of that shift would be joint positions within major corporations that some prominent professors are holding.
Regardless of their affiliations, the data science community converges together a few times a year at conferences. Prof. Chawla describes them as a “wonderful melting pot.” Some conferences are traditionally more academic, some purely industrial but some are a perfect blend of both. Prof. Chawla points to KDD, or the Special Interest Group on Knowledge Discovery and Data Mining, a conference known for such a connection. KDD maintains two parallel peer-reviewed tracks: the research track and the applied data science (ADS) track. As put by Dr. Dong, who was the ADS Program Co-Chair at KDD-2022, “KDD is helpful by providing a forum for researchers and practitioners to come together to listen to the talks and discuss the techniques while inspiring each other. KDD is a place where we break the barriers of communication and collaboration, where we demonstrate how data science and machine learning advances with industry consumption.”
This is the mindset that drove KDD from its early days. “One of the things we wanted to do from the very beginning was to create a conference where applications were well represented,” recalls Prof. Usama Fayyad, Executive Director of the Institute for Experiential AI at Northeastern University and a former Chief Data Officer of Yahoo, who together with Dr. Gregory Piatetsky-Shapiro co-founded the KDD conference in 1995. Prof. Fayyad believes that if AI conferences were focused only on academics, it would be a big miss, given the collective desire to prove research on real problems and the motivation to drive new research based on emerging data sets.
However, opening up KDD to the industry also had its challenges. With the research track being rightfully dominated by academia-originated work, the ADS track should have been primarily dedicated to applied studies coming from industrial research labs. In reality, more than half of ADS publications have their origins within academia or are a result of strong academic-industrial collaboration. A decade ago, Prof. Fayyad realized that many interesting AI applications were developed by teams that were simply too busy to write papers. He led KDD into its current phase, where KDD organizers venture and curate distinguished invited talks given by top industrial practitioners. The ADS invited talks have quickly become the highlight of the conference.
The KDD Cup competition, held annually in conjunction with the KDD conference, is yet another way to connect the academic and industrial worlds. “KDD Cup is a way to attract both industry and academia participants where companies bring some of the challenges that they are comfortable sharing, while academics get to work on data they would never have access to,” describes Prof. Teredesai, who is also the CEO of the health tech company CueZen. Each year, a novel task is introduced and a new dataset is released. Hundreds of teams sprint towards the most effective solution, competing for prizes and fame. Prof. Fayyad agrees: “It's been a very healthy thing for the field because we see participation from academia, students diving in, or even companies teaming together”.
Circling back to the choice between industry and academia: it will soon become irrelevant. With academic courses taught by practitioners, professors leading industrial labs, global cloud computing resources becoming dominant, and more data becoming available, the academic-industrial boundaries are quickly blurring in the AI domain. No need to stick to either sector; just choose the project you are most excited about!
Connor Lee is a 2023 graduate from Saratoga High School in the Bay Area. He will be joining the Computer Science program at NYU in the fall. By all means, Connor will be one of the youngest KDD attendees ever!
Generative AI has seen widespread adoption across multiple industries. However, when it comes to financial institutions, adoption has been relatively slow due to regulatory challenges and concerns. “For financial services providers, the risks and biases associated with large language models (LLMs) have even greater significance.
“As a regulated entity, we operate under various restrictions, particularly in terms of privacy. We have to adhere to strict data and consent frameworks to ensure the security and accuracy of the information we share. We don’t want to give out wrong information. So how we benefit from the generative model within our framework is something we are actively exploring,” Deepak Sharma, president & chief digital officer at Kotak Mahindra Bank told AIM. Kotak Mahindra is exploring all the options, from the GPT models by OpenAI to all the open-source LLMs available. “The emergence of Azure and other cloud platforms has brought forth a new wave of players actively exploring ways to deploy generative AI models within contained environments, utilising proprietary training datasets and enhancing them with LLM capabilities. This aligns closely with our current objectives and we are keen on leveraging such advancements to achieve our goals effectively.”
Sharma believes generative AI holds immense potential in various domains such as contact centres, code generation, conversational models, predictive modelling, and graphic visual design. “These are the specific areas where we are exploring the application of generative AI, rather than considering it solely as an extension of our existing AI capabilities,” he said.
Impact on jobs
The introduction of generative AI has sparked discussions around potential job losses, as is often the case with the emergence of new technologies. Similar concerns were raised when ATMs were first introduced, with widespread discussions about the potential impact on bank teller jobs. “I think we have always seen whenever technology has changed, some kind of jobs get eliminated and a new set of jobs gets created. “In any industry undergoing a transformation, it is natural for certain jobs to be phased out. We can anticipate a similar scenario with the integration of generative AI. However, it is important to note that alongside job eliminations, there is often an emergence of new and valued skill sets.”
As the landscape evolves, there will be a need for workers to upskill and repurpose themselves. “In today’s knowledge economy, this evolution is happening across various sectors, driven by factors such as hyper automation. Even prior to the discussions surrounding generative AI, hyper automation has played a significant role in reshaping job profiles and creating new opportunities.
“Hence, in my mind, generative AI will make humans far more efficient and productive. For example, I’m not saying that my conversational AI or generative AI will replace the contact centre agent, but it will make contact centre agents far more efficient in how he or she interacts with customers,” Sharma concluded.
Banking & AI
Today, banks are no longer just banks but technology companies. Over the last few years, banks in India have been investing heavily in technologies such as AI/ML to improve different aspects of banking operations, from customer service to underwriting, risk management, and fraud detection. “We were the first bank in India to build an AI-powered voice-based interface at our contact centre in 2019,” Sharma said.
He believes AI is set to revolutionise every facet of a financial institution’s operations. It is expected to have a pervasive influence, impacting all technologies presently employed within banks. AI will permeate various areas and processes within the banking industry, ushering in transformative changes.
It was during 2016-17 that Kotak Mahindra Bank shifted its focus to AI. Sharma revealed that some of the earliest AI models deployed by the bank were for propensity-based targeting of customers, enabling the bank to provide personalised offers and determine the next best offer for each individual.
“We have also integrated these models into our credit decisioning processes. Fortunately, banks possess abundant historical data that can be utilised to train these models, allowing us to build a robust real-time credit decisioning engine. Over the past six to seven years, as we have expanded our use cases, the application of AI and ML has become more diverse and sophisticated.”
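As a purely illustrative sketch of what such a propensity or credit-decisioning model can look like (this is not Kotak Mahindra's actual system; the features, synthetic data, and scikit-learn setup below are assumptions for demonstration), a classifier is trained on historical outcomes and then used to score a new customer in real time:

```python
# Illustrative only: a toy propensity/credit model trained on synthetic
# "historical" data. Real banking models are far more involved and regulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical features: income (lakh/yr), credit utilisation, years of history.
X = np.column_stack([
    rng.normal(12, 4, n),
    rng.uniform(0, 1, n),
    rng.integers(0, 20, n).astype(float),
])
# Hypothetical label: 1 = customer accepted an offer / repaid on time.
y = (0.2 * X[:, 0] - 3.0 * X[:, 1] + 0.1 * X[:, 2]
     + rng.normal(0, 1, n) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a new applicant "in real time": the probability drives the decision.
print("offer propensity:", model.predict_proba([[10.0, 0.4, 5.0]])[0, 1])
print("holdout accuracy:", model.score(X_test, y_test))
```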
Furthermore, Sharma highlights that the optimal utilisation of AI at Kotak Mahindra lies in building and reinforcing the credit and risk model. Additionally, Kotak Mahindra has also implemented AI solutions to combat fraud, bolster risk management practices, and fortify security measures.
The post Kotak Mahindra Bank is Gearing Up For a Generative AI-Powered Banking Future appeared first on Analytics India Magazine.
Generative AI’s notorious capability to dish out fake information in bulk and infringe on privacy has made the world rush towards framing rules around it. While the US, the European Union, and even India are still juggling possibilities, Chinese regulators have finalised unprecedented regulations concerning generative AI as part of their efforts to enhance supervision over this rapidly expanding technology.
The new regulations make clear that existing Chinese laws on cybersecurity, data privacy, and related areas apply to generative AI as well. Moreover, the regulations stipulate that companies providing APIs for others to build models will be held responsible for misuse. Interestingly, this also covers research and development, not just finished AI products.
The Cyberspace Administration of China (CAC), an influential regulatory body, collaborated with various other regulators to develop these new rules, which will become effective on August 15. It would be interesting to see how companies work and build generative AI products following the implementation of these regulations.
The country has been discussing rules governing generative AI for some time now. The draft regulations released in April received mixed reviews from around the world, though many appreciated the strict provisions against the use of copyrighted material for training generative AI applications.
China’s draft AI regulation: “providers of generative AI must ensure data used for training & optimization is obtained through legal means, & such data must…not contain content that infringes intellectual property.” While I may disagree with other provisions, China gets this https://t.co/Lh2jJhs6KN pic.twitter.com/m6cVkAu3gz
— neil turkewitz (@neilturkewitz) April 17, 2023
Made in China, Stays in China
OpenAI’s ChatGPT has been able to woo the world, and the hype cycle led other companies around the globe to work on something similar. Chinese tech giants also got excited and joined the generative AI bandwagon. However, the Chinese government, which exercises tight control over its domestic internet through censorship and regulation, has stifled the growth of the new AI technology.
Baidu and Alibaba, which introduced generative AI applications this year, have been engaging with regulators in recent months to ensure compliance with the regulations. These companies have been in contact with the authorities to ensure that their AI technologies adhere to the rules and do not violate any regulatory requirements.
Read: Great Firewall of China: Barrier to Baidu’s Global AI Aspirations
Chinese regulators are particularly worried about the possibility of these services generating content that contradicts Beijing’s perspectives or ideology. This is partly why Chinese tech firms have been cautious about launching their own versions of ChatGPT. Instead of providing widely accessible full services, these companies have concentrated their AI technology on enterprise and specific applications.
When it comes to competing with OpenAI, China’s strict regulation blunts whatever competitive threat its technology could pose. The country’s overarching rules have kept its homegrown models from launching on a global scale.
“It is the first time that China finds itself having to do a trade-off” between two Communist party goals of sustaining AI leadership and controlling information, said Matt Sheehan, a fellow at the Carnegie Endowment for International Peace.
Can AI be the global collaborator?
Sam Altman, the CEO of OpenAI, had earlier called on China to help frame rules around generative AI. Even though ChatGPT is not currently available in China, OpenAI aims to expand its footprint around the world.
“China has some of the best AI talent in the world and fundamentally, given the difficulties in solving alignment for advanced AI systems, this requires the best minds from around the world,” said Altman at an event at Beijing Academy of Artificial Intelligence.
Several governments and authorities are in a race to establish regulations to address the potential misuse of generative AI. The EU has put forward some of the most stringent rules globally, which has led to objections from companies and executives in the region. Meanwhile, discussions about AI governance and control are taking place in Washington, and the UK is initiating a review of the matter.
China’s rules appear stronger and stricter than those of any other country in the world. Altman has already been pushing to regulate AI competitors; possibly, by learning from China, he can shape rules to subdue the rivals already rising across the globe.
Read: AI War Between US & China Intensifies
The post Why China is Killing its Generative AI Ecosystem appeared first on Analytics India Magazine.
After years of delays, Elon Musk’s brainchild Tesla is in talks with the Indian government over an investment proposal to set up a car factory in the subcontinent with a yearly capacity of up to 500,000 electric vehicles. The starting price of the vehicles will be 2 million rupees (about $24,400), the Times of India reported.
The company has been eyeing the Indian market for years, but its plans stalled after it failed to get special incentives from the GoI to import cars at a lower duty. In return, the government has adamantly demanded that the Musk-led firm manufacture cars locally instead of importing them from other countries, like China. The company, however, wanted to first test the strength of demand. Tesla now plans to export cars from India to test demand in the Indo-Pacific region, according to government sources cited in a report by The Times of India.
The turnaround in the company’s plans to invest in the country comes a month after Musk met Indian Prime Minister Narendra Modi. “I am confident Tesla will be in India and we will do so as soon as humanly possible,” Musk said after the visit. In the meeting, Modi pushed the carmaker to make a “significant investment” in the country.
In its latest effort to enter the domestic market, the company held discussions in May with government officials about the incentives on offer, Reuters reported.
Using India as an export base allows Tesla to assess market potential and consumer preferences before expanding locally. It also takes advantage of India’s growing economy and focuses on a sustainable energy future including stationary battery packs and electric vehicles. After meeting Modi, Musk also hopes to bring SpaceX’s Starlink satellite internet service to the country.
The post Tesla Plans To Set Up Factory In India, A Month After Musk Met PM Modi appeared first on Analytics India Magazine.
Google’s Bard chatbot finally launches in the EU, now supports more than 40 languages
By Kyle Wiggers and Paul Sawers
Google is making its ChatGPT rival Bard available to a wider audience today, launching the generative AI chatbot in more than 40 languages and finally bringing it to the European Union (EU) after an initial delay due to data privacy concerns.
The internet giant also introduced a swathe of new features to Bard, though some are only available in English at first.
Google first teased Bard back in February in what was seemingly a rushed response to the snowballing success of ChatGPT, a super smart search engine / chatbot that leans on large language models (LLMs) to generate fresh content from simple prompts. ChatGPT is the handiwork of OpenAI, an AI company with heavy backing from Google rival Microsoft.
While Bard initially opened for early access in English starting in the U.S. and U.K. back in March, the initial waitlist ended in May with a global rollout spanning some 180 countries and with additional support for Japanese and Korean. A notable omission thus far, however, has been the EU, with Google delaying the EU launch after a privacy regulator voiced concerns. The Irish Data Protection Commission (DPC), which governs data protection in the EU region when companies use Ireland as their European HQ, said that while Google had informed the DPC of its intentions to launch Bard in the EU, it hadn’t provided the regulator with enough information to address its data privacy concerns.
With today’s launch, it seems that Google has now given the DPC what it was looking for.
“We’ve proactively engaged with experts, policymakers and privacy regulators on this expansion,” Bard product lead Jack Krawczyk, and VP of engineering Amarnag Subramanya, wrote in a blog post.
Indeed, Google has touted its latest update as its “biggest expansion to date,” rolling out across most of the world with support for Arabic, Spanish, Chinese, German, and Hindi. And in addition to the EU, Bard is also now available in Brazil.
Fine-tuning
Coinciding with the expansion are new features focused on fine-tuning Bard’s responses and beefing up the chatbot’s potential for productivity. Some were telegraphed and previewed in early May, but today marks their broad rollout.
Now, users can change the tone and style of Bard’s responses with five different options: “simple,” “long,” “short,” “professional” or “casual.” Available in English to start, the toggle takes Bard’s default responses to a prompt and adjusts them to align with whichever tone and style the user selects.
Elsewhere, Bard can now vocalize its responses thanks to a new text-to-speech AI feature. Supporting over 40 languages, the chatbot’s audible responses can be accessed by clicking the new sound icon next to a prompt.
On the productivity side, Bard can now export code to more places — specifically Python code to Replit, the browser-based integrated development environment. Images can be used in prompts — users can upload images with prompts (only in English for now) and Bard will analyze the photo. New options allow users to pin, rename and pick up recent conversations with Bard. And Bard’s responses can now more easily be shared with the outside world through links.
“Curiosity and imagination are the driving forces behind human creativity,” Krawczyk and Subramanya wrote. “That’s why we created Bard: to help you explore that curiosity, augment your imagination and ultimately get your ideas off the ground — not just by answering your questions, but by helping you build on them.”
Google struggled mightily with Bard early in the chatbot’s life cycle, failing to match the quality of responses from rival bots such as ChatGPT. It gave factually incorrect answers complete with made-up citations, leading even Google employees to label the chatbot “worse than useless” and a “pathological liar.” (The company’s stock briefly tanked 8% at Bard’s launch.)
But Google claims that Bard is improving in measurable ways, particularly in areas like math and programming. It’s also gained extensions, including from Google’s own apps and services as well as third-party partners like Adobe, along with the ability to explain code, structure data in a table, and surface images in its responses.
In another bad look for Google, though, reporting this week from Bloomberg revealed that the humans who train Bard are often overworked and underpaid. Some contractors make as little as $14 per hour, receive minimal training and are expected to complete complex audits of Bard in minutes.
Bloomberg’s story follows an Insider piece in April that found that Bard-testing contractors weren’t given enough time to verify the accuracy of the chatbot’s answers. From the looks of it, that hasn’t changed.
Programming languages are constantly evolving, each with a life cycle of growth, popularity and decline. The reasons behind their decline vary from outdated principles to newer, more efficient languages gaining popularity. Here are 10 languages that enjoyed popularity in their prime but faded into oblivion in the 21st century.
COBOL
In 1960, the CODASYL organisation played a significant role in the development of COBOL, a programming language influenced by the division between business and scientific computing. During that time, high-level languages in the industry were either used for engineering calculations or data management. COBOL, considered one of the four foundational programming languages along with ALGOL, FORTRAN, and LISP, was once the most widely used language worldwide. It continues to operate many of our legacy business systems.
Cause of Death: Two factors contributed to COBOL’s decline. Firstly, it had minimal connections with other programming language efforts; very few developers built upon COBOL, leaving it with little influence on second- or third-generation languages, which benefited from lessons learned from their predecessors. Secondly, COBOL is exceptionally intricate, even by today’s standards. Consequently, COBOL compilers fell behind those of contemporaneous microcomputers and minicomputers, providing opportunities for other languages to thrive and eventually surpass it.
ALGOL
In 1960, the ALGOL committee aimed to create a language for algorithm research, with ALGOL-58 preceding and quickly being replaced by ALGOL-60. Despite being relatively lesser known today compared to LISP, COBOL, and FORTRAN, ALGOL holds significant importance, second only to LISP, among the four original programming languages. It contributed to lexical scoping, structured programming, nested functions, formal language specifications, call-by-name semantics, BNF grammars, and block comments.
Cause of Death: ALGOL was primarily a research language, not intended for commercial use. Its specification lacked input/output capabilities, making practical application difficult. As a result, numerous ALGOL-like languages emerged in the 1960s and 1970s as people extended ALGOL with input/output capabilities and additional data structures; examples include JOVIAL, SIMULA, CLU, and CPL. Subsequent languages were built on these extensions rather than on ALGOL itself, and ALGOL’s descendants ultimately overshadowed and outpaced it in popularity and usage.
APL
APL was created by Ken Iverson in 1962. Originally developed as a hand-written notation for array mathematics, it was adopted by IBM as a programming language. APL focused on array processing, enabling concise manipulation of large blocks of numbers. It gained popularity on mainframe computers due to its ability to run with minimal memory requirements.
APL revolutionised array processing by introducing the concept of operating on entire arrays at once. Its influence extends to modern data science and related fields, with its innovations inspiring the development of languages like R, NumPy, pandas, and Matlab. APL also has direct descendants such as J, Dyalog, K, and Q, which, although less successful, still find extensive use in the finance sector.
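That lineage is easy to see in NumPy, one of the descendants-in-spirit named above. The short sketch below (with made-up numbers, purely for illustration) shows the whole-array, loop-free style APL pioneered:

```python
import numpy as np

# APL's signature idea: operate on entire arrays at once, with no explicit loops.
# NumPy inherits that style directly.
prices = np.array([104.0, 98.5, 101.2, 107.8, 95.3])    # illustrative values

returns = np.diff(prices) / prices[:-1]      # whole-array arithmetic in one line
above_mean = prices[prices > prices.mean()]  # boolean mask instead of a loop

print(returns)
print(above_mean)
```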
Cause of Death: APL faced challenges due to keyboard limitations. The language’s non-ASCII symbols made it difficult for widespread adoption. Ken Iverson addressed this issue in 1990 with J, which utilised digraphs instead of distinct symbols. However, this change came relatively late and did not gain significant traction in popularising a radically different programming style. Another challenge was APL’s limitation to homogeneous data, as it did not support storing both strings and numbers in the same data structure. Working with strings was also cumbersome in APL. These limitations, including the absence of dataframes, hindered APL’s suitability for modern data science applications.
BASIC
Created by John Kemeny in 1964, BASIC originated as a simplified FORTRAN-like language intended to make computer programming accessible to non-engineering individuals. BASIC could be compactly compiled into as little as 2 kilobytes of memory and became the lingua franca for early-stage programmers. It was commonly used by individuals programming at home in the 1970s.
Its major technical impact lay in its runtime interpretation. It was the first language to feature a real-time interpreter, beating APL by a year.
Cause of Death: BASIC faced the perception of being a “lesser” language compared to other programming languages used by professional programmers. While it continued to be used by children and small business owners, it was not considered the language of choice for experienced programmers. As microcomputers with larger memory capacities became available, BASIC was gradually replaced by languages like Pascal and C. BASIC persisted for some time as a legacy teaching language for kids but eventually faded away from that niche as well.
PL/I
Developed by IBM in 1966, PL/I aimed to create a language suitable for both engineering and business purposes. IBM’s business was previously divided between FORTRAN for scientists and COMTRAN for business users. PL/I merged the features of these two languages, resulting in a language that supported a wide range of applications.
PL/I implemented structured data as a type, which was a novel concept at the time. It was the first high-level language to incorporate pointers for direct memory manipulation, constants, and function overloading. Many of these ideas influenced subsequent programming languages, including C, which borrowed from both BCPL and PL/I. Notably, PL/I’s comment syntax is also used in C.
Cause of Death: PL/I faced challenges as it tried to straddle the line between FORTRAN and COBOL. Many FORTRAN programmers considered it too similar to COBOL, while COBOL programmers saw it as too similar to FORTRAN. IBM’s attempt to compete with two established languages using a more complex language deterred wider adoption. Moreover, IBM held the sole compiler for PL/I, leading to mistrust from potential users concerned about vendor lock-in. By the time IBM addressed these issues, the computing world had already transitioned to the microcomputer era, where BASIC outpaced PL/I.
SIMULA 67
Ole Dahl and Kristen Nygaard developed SIMULA 67 in 1967 as an extension of ALGOL for simulations. SIMULA 67, although not the first object-oriented programming (OOP) language, introduced proper objects and laid the groundwork for future developments. It popularised concepts such as class/object separation, subclassing, virtual methods, and protected attributes.
Cause of Death: SIMULA faced performance challenges, being too slow for large-scale use. Its speed was particularly limited to mainframe computers, posing difficulties for broader adoption. It’s worth noting that Smalltalk-80, which extended SIMULA’s ideas further, had the advantage of Moore’s Law advancements over the extra 13 years. Even Smalltalk was often criticised for its speed. As a result, the ideas from SIMULA were integrated into faster and simpler languages by other developers, and those languages gained wider popularity.
Pascal
Niklaus Wirth created Pascal in 1970 to capture the essence of ALGOL-60 after ALGOL-68 became too complex. Pascal gained prominence as an introductory language in computer science and became the second most popular language on Usenet job boards in the early 1980s.
Pascal popularised ALGOL syntax outside academia, leading to ALGOL’s assignment syntax, “:=”, being called “Pascal style”.
Cause of Death: The decline of Pascal is complex and does not have a clear-cut explanation like some other languages. While some attribute its decline to Brian Kernighan’s essay ‘Why Pascal is Not My Favorite Programming Language’, this explanation oversimplifies the situation. Pascal did face competition from languages like C, but it managed to hold its own for a significant period. It’s worth noting that Delphi, a variant of Pascal, still ranks well in TIOBE and PYPL measurements, indicating that it continues to exist in certain niches.
CLU
CLU was developed by Barbara Liskov in 1975, with the primary intention of exploring abstract data types. Despite being relatively unknown, CLU is one of the most influential languages in terms of ideas and concepts. CLU introduced several concepts that are widely used today, including iterators, abstract data types, generics, and checked exceptions. Although these ideas might not be directly attributed to CLU due to differences in terminology, their origin can be traced back to CLU’s influence. Many subsequent language specifications referenced CLU in their development.
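A hedged illustration of that legacy: Python’s generator functions are widely regarded as descendants of CLU’s iterators, and the tiny example below shows the same yield-one-element-at-a-time idea (the code is illustrative Python, not CLU itself):

```python
# CLU's iterators yield elements one at a time to a consuming loop.
# Python generators express the same idea: `yield` suspends the function
# and hands a value back, resuming where it left off on the next request.
def evens(limit):
    n = 0
    while n < limit:
        yield n          # produce the next element lazily
        n += 2

for value in evens(10):
    print(value)         # prints 0, 2, 4, 6, 8
```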
Cause of Death: CLU served as a demonstration language with Liskov’s primary goal being the adoption of her ideas rather than the language itself. This objective was largely achieved, as nearly all modern programming languages incorporate elements inspired by CLU.
SML
Robin Milner developed ML in 1976 while working on the LCF Prover, one of the first proof assistants. Initially designed as a metalanguage for writing proofs in a sound mathematical format, ML eventually evolved into a standalone programming language.
It is considered one of the oldest “algebraic programming languages”. ML’s most notable innovation was type inference, allowing the compiler to deduce types automatically, freeing programmers from explicitly specifying them. This advancement paved the way for the adoption of typed functional programming in real-world applications.
Cause of Death: ML initially served as a specialised language for theorem provers, limiting its broader usage. While SML emerged in the same year as Haskell, which exemplified a more “pure” typed functional programming language, the wider programming community paid more attention to Haskell. ML’s impact and adoption remained substantial within academic and research settings but did not achieve the same level of popularity as some other languages.
Smalltalk
Smalltalk, developed by Alan Kay, had multiple versions released over time. Each version built upon the previous one, with Smalltalk-80 being the most widely adopted and influential. It is often regarded as the language that popularised the concept of object-oriented programming (OOP). While not the first language with objects, Smalltalk was the first language where everything, including booleans, was treated as an object. Its influence can be seen in the design of subsequent OOP languages, such as Java and Python.
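That “everything is an object” legacy is visible in Python, one of the languages named above. A minimal sketch:

```python
# In Smalltalk, everything -- booleans, numbers, classes -- is an object that
# responds to messages. Python inherits that uniform object model.
print(isinstance(True, object))   # True: even a boolean is an object
print((255).bit_length())         # integers carry methods; prints 8
print("hello".upper())            # strings are objects too; prints HELLO
print(type(42).__mro__)           # the int type is itself an object
```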
Cause of Death: Smalltalk faced challenges related to interoperability with other tools and runtime performance. Its difficulty in integrating with existing systems and relatively slower execution speed hindered broader adoption. Its decline can be attributed to the emergence of Java, which had a more seamless interop with existing systems and gained overwhelming popularity. The legacy of Smalltalk lives on in the principles and design patterns that have become integral to modern software development.
The post Obsolete Code: 10 Programming Languages That Vanished Over Time appeared first on Analytics India Magazine.
Google-backed Anthropic, an AI lab based in San Francisco, has unveiled Claude 2, a publicly accessible alternative to GPT-4. Previously, Claude, the earlier iteration, was exclusively offered to enterprises, but the latest version is now open to the general public in the United States and the United Kingdom. Distinguishing itself from its predecessor, Claude 2 is accessible through both a beta website and an API.
The timing couldn’t have been better. Claude-2 comes at a time when the popularity of GPT has seen a decline in recent months. Users are seeking alternatives that offer superior performance and affordability. Claude-2 appears to fit the bill, with its enhanced capabilities and cost-effectiveness.
Learning from Google’s Bard and OpenAI’s ChatGPT and taking user feedback into account, Anthropic has made significant enhancements to Claude-2. Users on Twitter have been lauding Claude’s ability to engage in natural language conversations, clearly explain its reasoning, and produce less harmful outputs. Claude-2 builds on these strengths and adds several key features that elevate its performance to new heights.
One notable improvement is Claude-2’s enhanced coding, maths, and reasoning skills. This includes reading PDFs, something that GPT-based models still struggle with, and it arrives right around the time OpenAI introduced Code Interpreter on its paid models.
The best part? 100K context, so it can "see" the entire file Oh yah and its free This is arguable better than GPT4-32K because Its cheaper and has a larger context window
— Sully (@SullyOmarr) July 11, 2023
Let’s Evaluate
Anthropic has put considerable effort into fine-tuning the model. According to the model card of Claude-2, the model is built using unsupervised learning and reinforcement learning with human feedback (RLHF), similar to what OpenAI used for GPT. Moreover, the model is trained with data till early 2023, but does not access the internet.
Claude-2 now boasts an impressive 71.2% score on Codex HumanEval, a Python coding test, up from the 56.0% achieved by its predecessor, Claude-1.3, and ahead of GPT-4’s 67%. Claude-2 wins here.
Similarly, on the GSM8k maths problem set, Claude-2 scored 88%, an improvement from Claude-1.3’s score of 85.2%. These advancements position Claude-2 as a valuable asset for developers and individuals seeking assistance with technical challenges. GPT-4 wins here with a 92% score.
Claude-2, Anthropic's shot at GPT-4, has arrived. It's cheaper than GPT-4 and far stronger in reasoning & coding than its older self. Things you should know: ▸ On standard exams, it's not quite at GPT-4 yet but catching up fast compared to v1.3. Winner in bracket: GRE verbal:… pic.twitter.com/k8Vblc1BDg
— Jim Fan (@DrJimFan) July 11, 2023
The most important aspect is the expansion of Claude-2’s input and output capabilities. Users can now input up to 100,000 tokens per prompt, compared to 32,000 of GPT-4, allowing Claude-2 to process extensive technical documentation or even entire books. Additionally, Claude-2 can generate longer documents, ranging from memos to letters to stories, up to a few thousand tokens in length.
This also makes Claude-2 roughly 4-5 times cheaper than GPT-4-32K, for which a single full-length 32K prompt alone costs about $1.96. Prompt tokens cost $11 per million versus $60 per million for GPT-4-32K, and completion tokens cost $32 per million versus $120 per million, assuming similar tokenisation length. This will definitely push a lot of users to start using Claude-2 instead of GPT-4.
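To make the comparison concrete, here is a quick back-of-the-envelope calculation using the per-million-token prices quoted above; the request size (a 30,000-token prompt with a 1,000-token completion) is an arbitrary illustrative choice:

```python
# Per-million-token prices (USD) as quoted in the text.
PRICES = {
    "claude-2":  {"prompt": 11.0, "completion": 32.0},
    "gpt-4-32k": {"prompt": 60.0, "completion": 120.0},
}

def request_cost(model, prompt_tokens, completion_tokens):
    p = PRICES[model]
    return (prompt_tokens * p["prompt"]
            + completion_tokens * p["completion"]) / 1_000_000

for model in PRICES:
    cost = request_cost(model, prompt_tokens=30_000, completion_tokens=1_000)
    print(f"{model}: ${cost:.3f} per request")
# claude-2: $0.362 per request
# gpt-4-32k: $1.920 per request -> roughly 5x the cost
```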
Read: Busting the Myth of Context Length
Price drop and availability
Anthropic has made Claude-2 available through multiple channels. Users can access Claude-2 via the API, allowing businesses to integrate it into their systems seamlessly. Remarkably, Anthropic has maintained the same pricing for the Claude-2 API as its predecessor, Claude-1.3, making the upgrade to the latest model even more appealing to budget-conscious users.
If Claude 2 turns out to be as strong as GPT-4, thereby breaking the OpenAI monopoly on strong LMing, the number of companies building products on top of LMs will increase substantially. pic.twitter.com/prfWj3jlSu
— Ofir Press (@OfirPress) July 12, 2023
Partners like Jasper, a generative AI platform, have reported Claude-2’s strength in a wide range of use cases, particularly those involving extended content generation. With a 3X larger context window and improved semantics, Claude-2 has empowered Jasper’s customers to stay ahead of the curve and achieve their content strategy goals.
Another notable collaboration involves Sourcegraph, a code AI platform that assists developers in writing, fixing, and maintaining code. Sourcegraph’s coding assistant, Cody, leverages Claude-2’s improved reasoning and access to a larger context window of up to 100,000 tokens. By providing accurate answers and incorporating codebase context, Cody assists developers in speeding up their workflow and staying up to date with the latest frameworks and libraries.
Safe but still hallucinatory
According to Anthropic, the model has undergone rigorous evaluation, including internal red-teaming and automated tests on harmful prompts. In these evaluations, Claude-2 demonstrated a twofold improvement in giving harmless responses compared to Claude-1.3. Anthropic accepts, however, that no model is completely immune to misuse.
“For example, Claude models could support a lawyer but should not be used instead of one, and any work should still be reviewed by a human,” reads the paper. People on Twitter have already been pointing out that the claims about being good at maths are overstated.
It seems that Claude 2 is as bad at math as GPT4, maybe even slightly worse. Here's one example interaction: it makes a massive and fairly obvious error in screenshots 1) and 2), and in 3) it isn't aware of an important (openly available) paper published in 2015 pic.twitter.com/2OKojeE3qC
— Nonagon (@nonagonono) July 11, 2023
Anthropic acknowledges the evolving nature of AI and is committed to responsible deployment. Claude-2 is poised to become a trusted companion for individuals and a valuable tool for businesses.
As users look for alternatives amid declining ChatGPT usage, Claude-2’s budget-friendly offering and remarkable feature set make it an enticing option. It seems a true competitor for OpenAI is finally here, one that might finally force the company to drop its prices and compete on level ground.
The post Claude-2 vs GPT-4 appeared first on Analytics India Magazine.
Adobe recently announced that Adobe Firefly now supports text prompts in eight Indic languages: Gujarati, Hindi, Malayalam, Marathi, Punjabi, Tamil, Telugu and Nepali.
Overall, it supports prompts in over 100 languages, enabling users across the world to generate images and text effects using their native languages in the standalone Firefly web service.
The service will also be localised in 20 languages with versions in French, German, Japanese, Spanish and Brazilian Portuguese available now.
“Today’s announcement is about making Firefly accessible to more people in their preferred languages, so they can continue to leverage our unique model to bring their imagination to life, and create the highest quality assets that are safe for commercial use.”
“We’ve been amazed at how creators have been using Firefly to create more than a billion gorgeous images and text effects making it one of Adobe’s most successful betas ever in just over three months,” Ely Greenfield, CTO, Digital Media at Adobe, said.
Upon its launch, Adobe Firefly faced significant criticism for lagging behind its competitors at the time, namely Midjourney and Stable Diffusion. Moreover, Adobe’s Firefly release was deemed late, as it largely missed the text-to-image hype wave.
Nevertheless, Adobe managed to revolutionise the industry by seamlessly integrating Firefly’s capabilities into its renowned suite of design software.
This integration allowed creative professionals to effortlessly harness the power of generative AI within their workflow. The inclusion of Firefly in Adobe’s software went beyond mere novelty, as designers discovered practical and valuable use cases for generative AI in their work.
Since its launch, Firefly has generated over one billion images and is now integrated directly into Photoshop, Express and Illustrator, with workflows marrying generative AI’s speed and ease with Creative Cloud’s power and precision, according to Adobe.
The post Adobe Firefly Now Support Prompts in Eight Indic Languages appeared first on Analytics India Magazine.
You can still find some of the best robot vacuums on sale for Prime Day.
Our lives are busy. When we have limited time available to keep our homes clean and tidy, it isn't long until the clutter builds up and a molehill has turned into a mountain.
This is where modern home appliances shine. Intelligent thermostats can automatically manage our energy consumption and heating requirements; smart lighting can be scheduled, and when it comes to cleaning, robot vacuums can take some of the daily workload off your plate.
Also: Best Prime Day deals still available
Robot vacuums aren't the holy grail of domestic tasks, of course, but if you purchase the right model, you won't need to worry about keeping your floors swept and mopped. You can schedule them to perform these jobs for you — or to spot clean as and when you need — freeing up a little more time for you to spend how you like.
Below are the best deals we could find on robot vacuums during Amazon Prime Day. But hurry: The discounts will end very soon.