A Decade of React.js Ecosystem: Top 10 React Libraries 

Facebook had a huge user base back in 2011 making it a daunting task for the creators to make the experience seamless for its users. The objective was clear — to develop a user interface that was not only dynamic and responsive but also blazingly fast and exceptionally high-performing.

React played a key role in achieving this back then. Brought to life by Jordan Walke, one of Facebook’s software engineers, React revolutionised the development process, offering a streamlined and structured approach to constructing dynamic and interactive user interfaces. This innovative library empowered developers by providing reusable components, simplifying the creation of rich and engaging user experiences.

React, however, is more than the core library itself. It sits at the centre of an ecosystem of companion libraries that developers can use for different purposes, such as animation, form handling, or creating visually appealing user interfaces that improve the user experience. These libraries come equipped with pre-built components such as form inputs, pagination elements, menus, buttons, icon sets, and time/date pickers, making these essential features easier and faster to implement in React applications.

As React celebrates its 10th anniversary in 2023, we present to you the top 10 React libraries that are widely used and trusted by developers.

Redux

Redux is a widely used library for managing state in React applications, and Redux Toolkit is the official, recommended way to write Redux logic. It offers a collection of tools and best practices that help developers handle state efficiently. With Redux Toolkit, you can define and update state through a simplified interface. It also provides useful features like immutable updates, which preserve the integrity of your state, and encourages serializable actions for better control.
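The pattern Redux builds on can be sketched in plain JavaScript with no library dependency. This is a simplified illustration, not the Redux Toolkit API itself (Redux Toolkit's `createSlice` generates reducers and action types like these for you):

```javascript
// Minimal Redux-style store: a pure reducer computes the next state
// from the current state and an action, never mutating either.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action);
      listeners.forEach((fn) => fn());
      return action;
    },
    subscribe: (fn) => listeners.push(fn),
  };
}

// A reducer for a simple counter; immutable updates return new objects.
function counterReducer(state = { value: 0 }, action) {
  switch (action.type) {
    case 'counter/incremented':
      return { ...state, value: state.value + 1 };
    case 'counter/decremented':
      return { ...state, value: state.value - 1 };
    default:
      return state;
  }
}

const store = createStore(counterReducer, { value: 0 });
store.dispatch({ type: 'counter/incremented' });
store.dispatch({ type: 'counter/incremented' });
```

Because the reducer is a pure function, every state transition is predictable and easy to test, which is the core appeal of the Redux pattern.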

SWR

SWR is a well-known library used for managing server state in React applications. The name stands for stale-while-revalidate, a cache invalidation strategy defined in HTTP RFC 5861.
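The stale-while-revalidate idea can be sketched in plain JavaScript. This is a hypothetical illustration, not SWR's actual API: serve the cached (possibly stale) value immediately when one exists, and refresh the cache in the background so the next read is up to date:

```javascript
// Hypothetical sketch of stale-while-revalidate (not the SWR library API).
const cache = new Map();

async function swr(key, fetcher) {
  const cached = cache.get(key);
  // Kick off revalidation regardless; don't await it if we have a value.
  const revalidation = Promise.resolve(fetcher(key)).then((fresh) => {
    cache.set(key, fresh);
    return fresh;
  });
  // Serve stale data instantly when available; otherwise wait for the fetch.
  return cached !== undefined ? cached : revalidation;
}
```

The trade-off is deliberate: the caller gets an instant (possibly stale) response, while the cache quietly converges on fresh data.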

Next.js

Next.js offers a comprehensive range of powerful features, such as automatic code splitting, server-side rendering, and static site generation, among others. It is particularly well-suited for developing intricate applications that demand server-side rendering and SEO optimization. To begin working with Next.js, the simplest approach is to utilize create-next-app.

Tailwind CSS

Tailwind CSS is a CSS framework that prioritises utility-first development, allowing for the quick creation of custom user interfaces. It is a flexible and customisable framework that provides essential building blocks for designing unique interfaces. Unlike some other frameworks, Tailwind CSS avoids imposing opinionated styles that can be difficult to override, granting developers greater freedom in their design choices.

Sentry

Sentry is a comprehensive error-tracking platform that simplifies the process by handling the instrumentation, back end, and visualisation console for you. It offers seamless integration into your JavaScript codebase with minimal effort.

React Hook Form

In 2023, React Hook Form emerged as the recommended choice for managing forms. This library is known for its lightweight nature, speedy performance, and user-friendly approach. React Hook Form simplifies form handling and validation. It offers a versatile API that facilitates the construction of forms, along with seamless integration with popular validation libraries like Yup and Zod.
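The schema-driven validation idea can be sketched in plain JavaScript. This is a hypothetical illustration, not the actual React Hook Form or Yup/Zod APIs: each field maps to a rule function, and validation collects per-field error messages:

```javascript
// Hypothetical sketch of schema-based form validation: each field has
// a rule returning null (valid) or an error message (invalid).
function validate(values, schema) {
  const errors = {};
  for (const [field, rule] of Object.entries(schema)) {
    const message = rule(values[field]);
    if (message) errors[field] = message;
  }
  return errors;
}

// Example schema with two fields; the rules here are illustrative.
const schema = {
  email: (v) => (/^\S+@\S+\.\S+$/.test(v || '') ? null : 'Invalid email'),
  age: (v) => (Number.isInteger(v) && v >= 18 ? null : 'Must be 18+'),
};
```

Libraries like Yup and Zod formalise this idea into composable, typed schemas, which React Hook Form can consume through resolver adapters.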

React Spring

Animations can greatly enhance user interfaces, and the React ecosystem offers several popular animation libraries, React Spring among them. It simplifies the process of generating seamless and responsive animations, requiring minimal code to achieve impressive results.

React Testing Library and Cypress

When it comes to testing React applications, two excellent options to consider are React Testing Library for unit testing and Cypress for end-to-end testing.

React Testing Library is a JavaScript testing utility specifically designed for testing React components. In automated tests, where there is no real browser DOM to work with, components are rendered into a simulated DOM environment (typically jsdom), which React Testing Library then queries much as a user would. This lets us interact with and validate the behavior of React components, ensuring their functionality is thoroughly tested.

Cypress is a dependable and powerful solution for end-to-end testing of React applications. It allows you to create tests that replicate real user interactions with your application, such as clicking buttons, entering text via keyboard, and submitting forms. With Cypress, you can comprehensively test your React application’s functionality and ensure a smooth user experience.

React Router

React Router stands out as a highly favored routing library for React applications. It offers a simple and expressive approach to managing routing, allowing developers to define routes and dynamically render components based on the current URL. With React Router, handling navigation becomes more straightforward and declarative in React applications.
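The dynamic routing idea can be sketched in plain JavaScript. This is a hypothetical illustration of URL pattern matching, not React Router's actual API: a pattern like `/users/:id` is matched against the current path and the named parameters are extracted:

```javascript
// Hypothetical sketch of route matching: compare a pattern such as
// '/users/:id' segment by segment against a concrete path, collecting
// named parameters; return null when the path does not match.
function matchPath(pattern, path) {
  const patternParts = pattern.split('/').filter(Boolean);
  const pathParts = path.split('/').filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;
  const params = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      // Dynamic segment: capture its value under the parameter name.
      params[patternParts[i].slice(1)] = pathParts[i];
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // Static segment mismatch.
    }
  }
  return params;
}
```

In a real app, a router runs a match like this against the browser URL and renders the component registered for the winning route.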

Recharts

Recharts is a reliable and widely used React charting library among professionals and web developers. True to its declarative nature, the components Recharts offers are primarily designed for presentation, so charts are composed out of reusable React elements.

It is often recommended as a top choice for those seeking a straightforward, simplified approach to data visualisation projects.

The post A Decade of React.js Ecosystem: Top 10 React Libraries appeared first on Analytics India Magazine.

How Tech Mahindra is Driving Customer Excellence with Gen AI

Generative AI has been a disruptive force, and the Indian IT sector has been quick to recognise its potential. Leading IT companies in India are swiftly harnessing the power of generative AI and delivering its advantages to their clients. In April, Tech Mahindra became the first IT giant to launch a Generative AI Studio. Tech biggies, including Tech Mahindra, are making substantial investments in training their workforce to become proficient in this transformative technology.

“Generative AI is bringing a renaissance in the creator economy. Until now, AI has demonstrated its capabilities in observing, detecting, recommending, and predicting. Generative AI can create as well, thus making AI further disruptive. Looking at its tremendous potential, we are focused on investing in, and implementing this technology effectively,” Hasit Trivedi, CTO, digital technologies, and global head – AI at Tech Mahindra, told AIM.

In an exclusive interaction, he revealed that the IT giant was one of the early adopters of generative AI. Well before the launch of ChatGPT, the company developed the Storicool platform, an auto content creation tool that was ahead of its time, according to chief executive CP Gurnani. Tech Mahindra is also building an Indic large language model (LLM), which could be seen as a competitor to OpenAI’s GPT models that power the popular chatbot, ChatGPT.

Generative AI Studio

Earlier this year, Tech Mahindra launched its Generative AI Studio under its amplifAI0->∞ suite of AI offerings and solutions. “We were the first to launch a generative AI studio for experimentation and also use it internally,” Trivedi said. The studio is a one-stop centre to experiment, generate and optimise digital content. “It will help enterprises make the right decisions, identify the right technology in the space of generative AI, and aid them in achieving the next level of digitised efficiency. The studio enables enterprises to experiment with different tools, do the first level of proof of technology, and also run a small pilot with the selected technology for further experimentation.”

The studio will consolidate six aspects of content generation — code, document/text, image, video, audio, and data — under one umbrella, empowering enterprises to discover and explore the wide-ranging possibilities of generative AI. “The Generative AI Studio also provides enterprises with a user-friendly interface and a range of features to customise their content. Users can select their preferred content type, customise their options, and then let the studio handle the rest of the process.”

Moreover, the studio has codified prompt engineering in a form that reduces randomness in responses, and a standardisation approach was adopted to generate content, taking advantage of NLP, which is extremely beneficial for enterprises.

Being the first to launch a generative AI studio, Tech Mahindra is also leveraging the technology internally for application development, testing, migration, legacy modernisation, transition, documentation and so on. “As a part of BPS business, we are using it in service desk, customer servicing, reporting and reconciliation, etc. In our internal functions, we are using it in legal, CIO, HR, and marketing functions. In an AI lifecycle, we are using it for synthetic data generation and annotation.”

Furthermore, Trivedi revealed that Tech Mahindra is effectively adopting generative AI in its mainstream AI projects through Generative AI studio, TechM XaaS (Xperiment as a service), Evangelise Pair Programming, Enterprise Knowledge Search, and other cutting-edge solutions that are powered by generative AI. “We are infusing it into most of our technology services, including cyber security, metaverse, business excellence, intelligent automation, and advanced analytics elements to increase efficiency and productivity.”

Bringing gen AI capabilities to customers

Tech Mahindra is already implementing generative AI use cases for many customers, notably a leading multinational oil & gas transport pipeline company. “We have automated the generation of transcription of each discharge summary for a healthcare player, thereby helping to reduce efforts and turnaround time from days to a few hours.”

Moreover, Tech Mahindra’s generative AI offering is helping a global technology distributor provide reports using machine learning techniques like NLP and NLG/NLU to generate text. “The solution ingests raw data to generate text for the outcome analysis report, and it provides reports which have analysis, observation, analytics with visualisation, business insight, and recommended actions.”

As a result, the client has drastically reduced document-processing time, curbed costs, reduced dependency on domain experts, and gained a consolidated view of the extracted data. Furthermore, Trivedi revealed that the IT giant is also keeping an eye on acquiring products that enable it to build solutions involving advanced generative AI capabilities. “We will continue to foster new partnerships, especially with startups or products with niche capabilities that enable us to provide the right solution to our clients.”

Ethical and responsible use of gen AI

In light of the extensive uptake of generative AI by enterprises, it is crucial to conduct a thorough evaluation of the associated risks. These include concerns related to cybersecurity, data protection and copyright, as well as ethical and legal implications. Besides, hallucination remains a problem with LLMs to date. However, Trivedi believes hallucinations are reasonably manageable using available techniques, especially when dealing with enterprise data.

“Our Responsible AI framework allows clients to address these issues through multiple mechanics like process, tooling, techniques, awareness, education, and involvement of cross-functional teams. There are platform/product specific capabilities which need to be leveraged in the right manner. At times there are limitations in the said product/platform, which first need to be known and then addressed by appropriate mechanics,” Trivedi said.

Moreover, as part of the “Responsible AI” drive, Tech Mahindra has instituted many actions, like ensuring that employees are trained in and aware of responsible AI, and putting in place the right governance framework for AI projects, among other things. “We also encourage human-in-loop approach in critical AI-led inferences and actions and always ensure a structured, comprehensive and responsible AI maturity assessment for clients. Thus, enterprises across industries can leverage Tech Mahindra’s comprehensive and customisable on-premises and cloud-based solutions with the assurance of harnessing the remarkable potential of generative AI and driving impactful business outcomes.”


Busting the Myth of Context Length

Now that we have smaller models such as LLaMA and Falcon performing similarly to GPT-4 or PaLM in certain cases, the conversation has shifted from increasing the number of parameters to increasing the number of context tokens, or context length, in these models.

In essence, context length is the number of tokens an LLM can take into account when responding to a prompt; to answer well, the model needs visibility of the entire context in which the question has been put.

People often assume that the more input they provide, the better the output will be. In reality, that is not the case. Say you feed a 2,000-word article into ChatGPT: it makes sense of the text until around the 700-800 word mark, then starts hallucinating. That’s the truth.

This is pretty much similar to how memory or short term memory works for humans. But is it really the case that context length is all that matters?

Attention is indeed all you need

Take listening to a story or watching a movie, for example. In most cases, the audience remembers the introduction and the ending best, while the part in the middle has the least recall value. Jim Fan, an AI researcher at NVIDIA and a Stanford PhD, explains that this is exactly what LLMs are going through.

In his tweet, drawing on the recent paper from Stanford researchers, Lost in the Middle: How Language Models Use Long Contexts, Fan explains how claims of a million or a billion tokens are not helpful when it comes to improving LLMs. “What truly matters is how well the model actually uses the context. It’s easy to make seemingly wild claims, but much harder to solve real problems better,” he said.

The paper explains how models are good at retaining the information present at the beginning and the end of the context, but not what is in the middle. This is the same with all LLMs currently being developed, including GPT, PaLM, and Flan-T5.

Moreover, models with a natively longer context also fail to actually use the context better. In the paper, the researchers demonstrate that both versions of GPT-3.5, one with a 4k and the other with a 16k context length, produce similar results, and that performance decreases as the context grows longer.

Ahmed M from the Computer Research Institute of Montreal adds that this might stem from the training examples and the nature of the input data. Most of these models are trained on internet data, with pages such as news articles that put the most important information at the beginning and at the end. As a result, LLM outputs exhibit the same pattern.

Stupidity like humans

Ever since the Transformer was introduced in the Attention Is All You Need paper, context length has been discussed extensively in every LLM release. It has always been believed that increasing the sequence length would improve the accuracy of the models. But, just as humans forget half the story midway, LLMs are showcasing similar capabilities, or perhaps inabilities.

One thing that is certain is that in the push to make chatbots as smart as humans, we have definitely managed to make them as dumb as humans. Maybe that is what we get, even if it is not what we wanted. The similarity between human brains and Transformers is astonishing.

In discussions on HackerNews, Reddit, and Twitter on the same topic, users shared how increasing the number of tokens is becoming laughable at this point. “I’ve noticed this with GPT-4. It’ll ignore some part of its context, and when I point it out, it knows, so it’s clearly still in its context, but it didn’t know it has to look it up for a particular answer. We also have the same problem with memory, so I empathise.”

Moreover, if LLM providers charge per token through their APIs, increasing the number of context tokens makes commercial sense for them regardless. More research would show whether it also makes technical sense.

The considerable cost of tokens in transformers makes one question whether the money will eventually be worth it. Anthropic’s Claude, which has the largest context window at 100k tokens, will likely be very costly to fill; for comparison, GPT-4’s 32k context version is priced at USD 0.06 per 1K prompt tokens, which works out to nearly USD 2 to fill the window once.
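As a back-of-the-envelope sketch (the per-1K price below is an illustrative assumption; actual API pricing varies by model and changes over time), the cost of filling a context window scales linearly with its length:

```javascript
// Rough cost of filling a context window once, at a given price per
// 1,000 tokens. The rate is an illustrative assumption, not a quote.
function contextCostUSD(tokens, pricePer1KUSD) {
  return (tokens / 1000) * pricePer1KUSD;
}

// A 32k window vs a 100k window, both at an assumed $0.06 per 1K tokens:
const cost32k = contextCostUSD(32000, 0.06);   // 1.92
const cost100k = contextCostUSD(100000, 0.06); // 6.00
```

The linear scaling is the point: tripling the window roughly triples the bill for every fully packed prompt, whether or not the model actually uses the middle of it.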

For now, LLMs, like us humans, have a curious habit of remembering the story’s beginning and end with flair while casually dismissing the messy middle part. These models exhibit a common tendency — the longer the context, the higher the likelihood of their stumbling. It’s almost as if they suffer from a case of “attention deficit context disorder”.


Elon Musk launches new AI company to ‘understand the true nature of the universe’



For months, Elon Musk has been leaving a trail of breadcrumbs pointing toward a bigger AI project on the horizon and today that project was formally announced.

xAI is a new company, led by Musk, that seeks to "understand the true nature of the universe," according to the release posted on the company's website.


The release shared the names of a dozen xAI team members, including former employees of DeepMind, OpenAI, Google Research, Microsoft Research, Tesla, and the University of Toronto. The list also includes engineers Igor Babuschkin and Manuel Kroiss, who Musk hired in early March.

xAI is a separate company from X Corp, the company Twitter merged into in April. However, the release notes that the new AI company will work closely with Twitter and Tesla, among other companies, "to make progress towards our mission."

Today's announcement follows a series of moves that suggested a major AI project in the making, including state filings in April which revealed that Musk had created a new AI company.

In April, Musk also shared his intention to create a new AI model called TruthGPT. In an interview with Tucker Carlson, Musk described the new model as a "maximum truth-seeking AI that tries to understand the nature of the universe," wording that mirrored the press release's description of xAI.

Musk also purchased 10,000 GPUs for Twitter, hardware that is crucial for training machine-learning models, which again hinted at a project like xAI.

Despite all of Musk's work toward building this company, the announcement came at an interesting time as Musk signed a petition to halt further AI developments at the end of March and has spoken openly about the dangers of AI.


This isn't Musk's first AI endeavor; he was an investor in OpenAI when it was founded in 2015. Now xAI is positioning itself to compete directly with OpenAI as well as other giants in the industry.

According to the xAI site, Musk and the team will answer questions about the new company via a Twitter Spaces chat on Friday, July 14.


IBM Launches New WatsonX Foundation Models for Enterprise


IBM this week launched an AI platform that gives generative AI customers an option to stay within its ecosystem. Called watsonx, the platform, now generally available after a two-month beta, is designed for enterprises to build, tune, deploy and manage foundation models for talent acquisition, customer care, IT operations and application modernization.

It also gives the company a competitive position when compared to Amazon SageMaker Studio, Google Vertex AI, Microsoft Azure AI and Anthropic’s Claude large language model.

In May 2023, IBM first previewed and opened a waitlist for watsonx. Because it is built on foundation models, a form of generative AI trained on terabytes of unstructured data, watsonx doesn’t need a model to be repeatedly retrained on new data sets for each new function it’s assigned: a foundation model can be transferred to any number of functions and tasks with minor tuning. The evolving versions of ChatGPT show how foundation models can be used to build conversational large language models.


To date, watsonx has been shaped by more than 150 users across industries participating in the beta and tech preview programs, with more than 30 of them sharing early testimonials, according to IBM.

Watsonx comprises a trio of foundational AI products

IBM said watsonx comprises a trio of generative AI model configurations:

  • The watsonx.ai studio for building and tuning foundation models, generative AI and machine learning.
  • The watsonx.data fit-for-purpose data store built on an open lakehouse architecture.
  • The coming watsonx.governance toolkit to enable AI workflows that are built with responsibility, transparency and explainability.

The company’s July 11 launch focused on watsonx.ai and watsonx.data; IBM will launch watsonx.governance later this year, said Tarun Chopra, IBM’s vice president of Product Management, Data and AI.

“On July 11, we [launched] the first two as SaaS services on IBM cloud, with watsonx.data also on AWS, on premises. These components work by themselves, but we are the only ones out there bringing them together as a platform,” he said.

Creating a data pipeline for generative AI

Chopra explained that watsonx.data is designed to help clients deal with volume, complexity, cost and governance challenges around data used in AI workloads, letting users access cloud and on-premises environments through a single point of entry.

He said that, while watsonx.data is a lakehouse repository, rather like Databricks or Snowflake, that can stand on its own as an open-source repository, it’s also a source of data, rather like a plugin for fine-tuning AI models.


“You can, of course, connect that AI model to an S3 bucket or other cloud object storage where your data is located, or you can populate that data into a repository,” said Chopra. He added that if a user, in the latter case, has data associated with an AI model, they can automatically dump that data into the watsonx.data repository, which provides more functions and features than a typical cloud object storage.

The company said watsonx.data uses fit-for-purpose query engines like Ahana Presto and Apache Spark for wide workload coverage ranging from data exploration, data transformation, analytics and AI model training and tuning.

“If you are bringing Excel files, jpegs, other tables, web pages and so forth into the training set, you can house that in a watsonx.data instance and build in all of the lineage accordingly, because some of that you will have to provide your consumers who are asking where the data is coming from,” said Chopra.

Watsonx offers a triptych of model sources

Chopra explained that watsonx is unique in the AI space because it has the flexibility of hybrid, multicloud deployment and the ability to take advantage of open source (it runs on Red Hat OpenShift), including open-source models such as those from Hugging Face, thousands of which are already available through watsonx.

“Because there is no single big hammer to solve all problems, we are providing a lot of flexibility in watsonx.ai, a workbench where you can have three sources of deployment, three libraries that can come into play: an IBM supplied model, open source models, customers’ own models,” said Chopra.

The company said the models support natural language processing tasks including question answering, content generation and summarization, text classification and extraction.

More IBM watsonx releases this year and next

IBM will offer graphic processing unit options on IBM Cloud. These GPU options are designed to support large enterprise workloads, according to the company, which said it will develop full stack high-performance, flexible, AI-optimized infrastructure for AI models later this year on IBM Cloud.

Also, the company said watsonx.data will use the watsonx.ai foundation models to give users the ability to use natural language to visualize and work with data.

The company said that over the next year, it will expand enterprise foundation model use cases beyond natural language processing and create 100 billion+ parameter models for targeted use cases. The governance capabilities will be aimed at helping organizations implement lifecycle governance, reducing risk and improving compliance, per IBM.


Wipro Invests $1 Bn to Upskill Employees on Gen AI

Wipro has decided to invest $1 billion towards the enhancement of AI capabilities within the next three years. The IT giant has also unveiled Wipro ai360, a holistic and AI-centric innovation ecosystem aimed at incorporating AI into all internal platforms, tools, and solutions provided to clients.

Wipro is also set to introduce the GenAI Seed Accelerator program, which aims to prepare select generative AI-focused startups for enterprise-level operations through training. To achieve these objectives, Wipro plans to educate all of its 250,000 employees on the fundamentals of AI and responsible AI usage within the next year. Furthermore, the company will offer specialised and ongoing training programs tailored to employees in AI-related roles, much as Infosys did recently.

Wipro’s AI Ambitions

The investment of $1 billion will support the advancement of Wipro’s AI, data, and analytics capabilities, as well as its research and development efforts. It will also contribute to the enhancement of FullStride Cloud and the development of new consulting capabilities, enabling clients to adapt to change and harness the benefits of AI.

The inclusion of Wipro’s innovation centre, Lab45, within the Wipro ai360 ecosystem will play a crucial role in providing clients with the necessary resources to enhance their use of artificial intelligence. Lab45 will offer expertise, training, scalability, and research opportunities, enabling the acceleration of AI adoption through collaborative innovation.

Indian IT’s Big Bets on AI

But this is not the first time that Indian IT has decided to create an AI suite.

Tech Mahindra also introduced its Generative AI Studio as part of its amplifAI0->∞ range of AI solutions and services to make AI accessible and adaptable for enterprises worldwide. The studio is designed to consolidate six key areas of content generation, namely: programming code, textual documents, images, videos, audio, and data. This initiative aims to streamline and centralise the process of creating various forms of content using AI.

Back when AI was not even a buzzword, Infosys was among the backers, alongside the likes of Elon Musk and AWS, that collectively pledged $1 billion to ChatGPT maker OpenAI at its founding. The IT major also has a new platform called Topaz that uses generative AI technologies to help businesses automate tasks, improve customer service, and enhance security. Topaz comes with 12,000 use cases and is designed to be easily integrated into existing business processes.


Will Vedanta Partner With Micron?

After its recent joint venture with Foxconn to set up semiconductor and display production plants in Gujarat fell through, many wrote off the Indian conglomerate, seeing the collapse as a huge setback. However, the failed Vedanta-Foxconn venture may prove a blessing in disguise: instead of a single project, there will now be two semiconductor projects, as Foxconn has decided to pursue its own venture.

During the company’s 58th Annual General Meeting held today, Vedanta chairman Anil Agarwal declared, “We have lined up partners for our semiconductor venture. These ventures will enable our youth to access affordable electronic devices, all of which will help them fulfil their aspirations.” The statement indicated that Vedanta is still in the game.

In another development, Micron, one of the world’s largest semiconductor companies, announced plans to build a new assembly and test facility in Gujarat, entailing a total investment of USD 2.75 billion.

Micron could well be an alternative to Foxconn for Vedanta. Given Micron’s history in the field, it could also be the ideal technological partner that the Indian firm has so far failed to secure. Micron’s expertise would not only further Vedanta’s cause but also help shorten the time to actual output.

Micron is willing to put in $825 million while the rest will be financed by the government in two phases.

All is Not Lost for Vedanta

Moreover, Vedanta has already expressed full commitment to its semiconductor fabrication project and has formed alliances with other partners. Less than a week ago, Vedanta announced the acquisition of the semiconductor and display business from Twin Star Technologies Ltd, an entity of Volcan Investments Ltd, the ultimate holding company of Vedanta.

This will position it as the first company in India’s integrated semiconductor and display fabrication business. This is in line with the vision and stance of Vedanta chairman and founder Anil Agarwal on manufacturing in India and making the country a hub for semiconductors and their export. “Semiconductor is today’s oil; we are chip takers (India) and we will become chip makers and this will change our country,” Agarwal had said in an interview.

Agarwal has been at the forefront of India’s semiconductor push and has declared his complete commitment to the cause. “Vedanta is committed to making India self-reliant in electronics. This is the beginning of the creation of a Silicon Valley in India, a cutting-edge and world-class electronics ecosystem. My dream is for every Indian youth to have an affordable smartphone, laptop and an electric vehicle,” Agarwal said in a statement.

Vedanta had also previously said that Indian-made semiconductors and display glass will lead to affordable electronics for all Indians, making smartphones, laptops, televisions, and electric vehicles easier to obtain.

Catalyst for India Semiconductor Space

Even though the joint venture between Vedanta and Foxconn fell through, it may act as a catalyst for the overall electronics and semiconductor ecosystem. Recently, a slew of developments seem to have rejuvenated the conversation around the semiconductor industry in India, as the country looks to position itself as one of the leading players in the field.

Applied Materials, a leading semiconductor equipment maker, announced its plans to invest $400 million over the next four years to build a collaborative engineering centre in Bengaluru. This centre will primarily concentrate on the advancement and implementation of technologies related to semiconductor manufacturing equipment.

Additionally, Lam Research has put forth a proposal to train 60,000 Indian engineers using its Semiverse Solution virtual fabrication platform. The initiative aims to accelerate India’s goals in semiconductor education and workforce development. This could add to Vedanta’s momentum, as an educated workforce could help drive homegrown innovation in the field.

MoS for Electronics and IT Rajeev Chandrasekhar also addressed the deal falling through, saying that Taiwanese electronics manufacturer Foxconn’s decision to pull out of the Vedanta joint venture has no impact on India’s semiconductor fabrication goals. “This decision of Foxconn to withdraw from its JV with Vedanta has no impact on India’s Semiconductor Fab goals. None,” Chandrasekhar tweeted.

“It’s not for govt to get into why or how two private companies choose to partner or choose not to, but in simple terms it means both companies can & will now pursue their strategies in India independently, and with appropriate technology partners in Semicon n Electronics,” Chandrasekhar tweeted.

Agarwal, during the company’s general meeting, also praised the government and its initiative. He said, “The government led by PM Modi has proactively rolled out the progressive policies for the domestic manufacturing of semiconductors and display back in India. This year, subject to government approval, your company will begin a foray into the semiconductor and display fab.”

He has also maintained that India could stand to benefit from the US-China sanctions war. “Our country stands to be the financial beneficiary of the China plus one strategy as we have a government that has a razor-sharp focus on creating an investment-friendly environment,” Agarwal said.

The post Will Vedanta Partner With Micron? appeared first on Analytics India Magazine.

‘The world is running out of developers’, says Salesforce exec


The recent spate of downsizings at technology companies won't last long because demand for technology capabilities, such as artificial intelligence (AI) and integration, is only getting more intense.

While AI might help companies deal with the demand for talent, especially when it comes to more mundane tasks, it's also important to recognize that the demand for skilled professionals will just keep growing.

Also: Meet the post-AI developer: More creative, more business-focused

That's the word from Brent Hayward, CEO and general manager for Salesforce's MuleSoft division. I caught up with Hayward at Salesforce's recent New York gathering and he provided insights on the evolving landscape for technology skills.

"There aren't enough developers in the world," he said. "We've already achieved exit velocity, where the pace of apps and the pace of technology is way outpacing the ability to bring on tech workers."

Hayward said the complexity of today's environments also contributes to the tightening squeeze: "It's almost impossible to be a full-stack developer anymore. There are too many technologies. And what courses do you take over a lifetime to know how all this stuff works?"

Recent employment data backs up Hayward's assertions. "Demand for talent, particularly people with technology skills, remains favorable, according to Bureau of Labor Statistics data," reported Deloitte analysts in the Wall Street Journal. "For employees in the information and technology sector, the demand is particularly strong. There were 41,000 more hires than layoffs in January, with 99,000 job openings."

Also: ChatGPT is the most sought out tech skill in the workforce, says learning platform

While innovative AI and automation technologies might mean businesses need more highly capable talent, these emerging technologies can also help to alleviate skills shortages. AI and automation can perform low-level and tedious integration tasks, elevating the tasks of developers and IT specialists to "reviewing and validating the mapping", said Hayward.

As part of this process of skills enrichment, he said AI is going to help level the playing field, providing advanced knowledge to all organizations: "We're going to find that in low-code, no-code, and even pro-code scenarios, we're giving everyone the best knowledge."

Also: I used ChatGPT to write the same routine in these ten obscure programming languages

At the same time, human talent will remain key to implementation. "I don't know if it will ever replace the most knowledgeable and deepest developer that you have," he said. "Boy, what an opportunity to level the playing field to bring the lowest common denominator up to the median. You'll have less errors, less issues, and be able to build for scale more effectively."

Hayward said the challenge for enterprises — and this is where technology talent will remain in high demand — is "the need to connect that backend, pro-developer integration and systems automation patterns with more new modern, task-based workflow triggers. Those worlds are blending. While the world is running out of developers relative to scale, the world is growing very smart people that use applications and know the data model. As these worlds come together, we're seeing a very powerful paradigm shift."


Google’s latest art project recruits an animated bird — and AI — to play the cello

By Kyle Wiggers

Google might be laser-focused on generative AI these days, like the majority of its Big Tech competitors. But that hasn’t stopped the search giant from patronizing pretty out-there art projects, true to its experimental roots.

Enter the latest creation from Google’s Art & Culture Lab, Viola the Bird, which uses AI to understand cello and violin compositions. The work has Viola — an animated bird that evokes a Sesame Street character — “perform” famous Beethoven, Vivaldi, Holst and Ravel stringed pieces as a user moves their mouse back and forth along a virtual cello in their web browser.

David Li, the artist behind Viola, worked with cellists and violinists as well as music arrangers to develop the AI, which he then applied to create an audio synthesis engine that generates the sounds of a cello or violin based on a user’s mouse movements.

Viola playing music, powered by AI.

“The result is an interactive music experiment that is both fun and educational,” Pamela Peter-Agbia, a program manager at Google Arts & Culture, writes in a blog post. “Viola the Bird is a great way for anyone to learn about string instruments and to explore their own creativity through music.”

Having spent some time with Viola, I can attest to the “fun” part — but wouldn’t go so far as to say the project is educational. It doesn’t provide sheet music or notes to accompany your “playing,” and there aren’t any guardrails to prevent someone from performing songs wildly off-tempo.

Questionable pedagogy aside, there’s enough to keep even casual classical fans entertained for a minute, like a recording feature and a freestyle mode that lets you jam via Viola, on the viola, until you’ve had your fill.

If you’re bored during the next lunch break — or have young kids to keep entertained — give Viola a try. It’s free. Just keep your expectations in check — unlike some of Google’s other AI-powered explorations in music of late, this bird won’t exactly blow your mind.

KDnuggets News, July 12: 5 Free Courses on ChatGPT • The Power of Chain-of-Thought Prompting

Features

  • 5 Free Courses on ChatGPT by Nisha Arya
  • Unraveling the Power of Chain-of-Thought Prompting in Large Language Models by Matthew Mayo
  • A Gentle Introduction to Support Vector Machines by Bala Priya C

From Our Partners

  • Building AI Products with OpenAI: A Free Course from CoRise by CoRise

This Week's Posts

  • Overcoming Imbalanced Data Challenges in Real-World Scenarios by Sergei Petrov
  • Introduction to Safetensors by Abid Ali Awan
  • Reinforcement Learning: Teaching Computers to Make Optimal Decisions by Bala Priya C
  • Segment Anything Model: Foundation Model for Image Segmentation by Youssef Rafaat
  • Exploring the Latest Trends in AI/DL: From Metaverse to Quantum Computing by Ihar Rubanau
  • Here Are the AI Tools I Use Along With My Skills to Make $10,000 Monthly — No BS by Nitin Sharma
  • Why is DuckDB Getting Popular? by Abid Ali Awan
  • Synthetic Data Platforms: Unlocking the Power of Generative AI for Structured Data by Himanshu Sharma
  • 5 Highest-paid Languages to Learn This Year by Abid Ali Awan
  • Data Science Project of Rotten Tomatoes Movie Rating Prediction: Second Approach by Nate Rosidi
  • A Guide to Data Science Project Management Methodologies by Nisha Arya

From Around The Web

  • SQL Crash Course for Data Scientists by Data Science Horizons
  • Understanding Scope and Variable Lifetime by Learn Computer Science with Python
  • Intriguing Properties of Quantization at Scale by Cohere
  • Is Humanity on the Brink of an AI Eclipse? by Cezary Gesikowski
  • Meet TextDeformer: An AI Framework For Text-guided 3D Mesh Deformation by Daniele Lorenzi

More On This Topic

  • Unraveling the Power of Chain-of-Thought Prompting in Large Language Models
  • 5 Free Courses on ChatGPT
  • KDnuggets News, April 6: 8 Free MIT Courses to Learn Data Science Online;…
  • KDnuggets News, December 14: 3 Free Machine Learning Courses for Beginners…
  • KDnuggets News, May 4: 9 Free Harvard Courses to Learn Data Science; 15…
  • KDnuggets News, July 27: The AIoT Revolution: How AI and IoT Are…