Top 9 Semiconductor GCCs in India

Semiconductor GCCs are on the rise in India. About 30% of the new GCCs set up in India during Q4 2023 were in the semiconductor space, signalling a growing interest in leveraging local talent for front-end design, performance testing, and post-silicon validation.

A closer look at the recent trends shows Bengaluru racing ahead in India’s semiconductor GCC landscape. The country’s own Silicon Valley hosts approximately 42% of all semiconductor GCC units and 61% of GCC talent in the country.

Hyderabad follows with 23% of the total units and 21% of the talent.

Here are the top semiconductor GCCs in India.

Signature IP

Signature IP, a US-based company founded in 2021, is dedicated to advancing network-on-chip (NoC) technology. As one of the emerging semiconductor players in India, Signature IP established a Global Capability Center (GCC) in October 2023.

The company expanded its presence by inaugurating a new R&D centre in Bhubaneswar, with a focus on developing cutting-edge NoC solutions. The centre aims to foster collaboration with local universities, research institutions, and semiconductor companies to drive innovation and talent development in the NoC domain.

EdgeCortix

EdgeCortix, a Japan-based fabless semiconductor company, specialises in developing AI-specific processor architecture from the ground up. As one of the recent entrants in India’s semiconductor landscape, EdgeCortix has established a GCC in Hyderabad.

The company offers a full-stack AI inference software development environment, run-time reconfigurable edge AI inference IP, and edge AI chips for boards and systems.

EdgeCortix’s flagship product, the Dynamic Neural Accelerator IP core, is scalable from 1024 to 32768 MACs and boasts a 16x improvement in inference/sec/watt compared to GPUs.

M31 Technology Corporation

M31 Technology Corporation is a Taiwan-based silicon IP provider that opened an R&D design centre in Bengaluru in October 2023. It focuses on IP development, IC design, and EDA, including memory compilers and standard cell library solutions.

The Bengaluru R&D centre is M31’s first overseas R&D location. The company has received TSMC’s Best IP Partner Award for many consecutive years.

Micron Technology

Micron is investing $2.75 billion to build a semiconductor facility in Sanand, Gujarat. The plant will focus on the assembly and testing of DRAM and of 1 TB 232-layer 3D TLC NAND flash memory chips for diverse applications in domestic and international markets.

Construction is expected to begin this year, with Phase 1 (500,000 sq ft cleanroom) operational by late 2024. Phase 2, similar in scale to Phase 1, is slated to start in the latter half of the decade.

The project is expected to create up to 5,000 direct Micron jobs and 15,000 community jobs over the next several years. The company will also receive 50% fiscal support from the central government and 20% from the Gujarat government.

AMD

AMD recently inaugurated its largest global design centre in Bengaluru, and is planning to employ about 3,000 engineers.

The 500,000 sq ft AMD Technostar campus, with 60,000 sq ft of R&D labs, is part of AMD’s $400 million investment in India over five years. The centre will focus on high-performance CPUs, GPUs, SoCs and FPGAs.

Intel

Intel operates its GCC in India, with design and R&D centres playing a pivotal role in its global semiconductor operations. These centres are primarily involved in chip design and development activities.

Although Intel doesn’t currently manufacture chips in India, it has collaborated with prestigious academic institutions like IIT Bombay to foster semiconductor research and talent development.

For instance, Intel has established the Emsys Lab at IIT Bombay, concentrating on electronic and embedded system design, prototyping, evaluation, and hardware-accelerated simulation.

However, it remains open to the potential of future semiconductor manufacturing in India.

Texas Instruments

Texas Instruments (TI) was the first multinational company to establish a software design and R&D centre in India, setting up in Bengaluru in 1985. In the nearly four decades since, TI’s India centre has evolved into a critical R&D hub, with engineers contributing to almost every product TI develops globally.

In 2002, TI India expanded its focus to include the design of 3G wireless chipsets and the development of Wireless LAN (WLAN) chipsets.

In 2005, TI India partnered with Indian manufacturer BPL to create the first cell phones designed and manufactured in India, tailored to the specific needs of the Indian market and based on TI chipsets and reference designs.

In December 2010, TI established Kilby Labs in Bangalore, marking its first international expansion of the research program beyond the US. The labs focus on innovation in energy efficiency, bio-electronics, and life sciences, further solidifying TI’s commitment to technological advancement.

Nvidia

NVIDIA has established four engineering centres in India, including in Bengaluru and Delhi, employing a total of 4,000 engineers. This makes India the company’s second-largest talent pool after the United States.

It is actively collaborating with leading Indian companies such as Reliance and the Tata Group to establish advanced AI data centres and computing infrastructure within India.

The AI data centres will leverage NVIDIA’s next-generation GH200 Grace Hopper Superchip and DGX Cloud, an AI supercomputing service, to deliver exceptional performance and easy access to AI technology.

Additionally, Tata Communications and NVIDIA are jointly developing an AI cloud in India, utilising Tata’s global network to provide critical infrastructure for the next generation of computing and bring AI capabilities to enterprises.

Qualcomm

Qualcomm has made a significant investment of Rs 177.27 crore to enhance its presence in Chennai by establishing a new design centre facility. The new facility is expected to create employment opportunities for up to 1,600 professionals and will be instrumental in driving Qualcomm’s R&D efforts in 5G technology on a global scale.

With existing engineering centres in Bengaluru, Hyderabad, Chennai, and Delhi, Qualcomm boasts a workforce of 4,000 engineers in India, positioning the country as its second-largest talent pool after the US.

These Indian offices specialise in various domains such as wireless modem and multimedia software, DSP and embedded applications, and digital media networking solutions.

The post Top 9 Semiconductor GCCs in India appeared first on Analytics India Magazine.

India is a Goldmine for AI Talent 

A recent survey by the Graduate Management Admission Council (GMAC) shows a burgeoning demand for AI upskilling at workplaces in India and China. Unsurprisingly, this has led to a 40% increase in interest in AI and data science among graduate management education (GME) aspirants worldwide.

Not to mention, a whopping 57% of MBA aspirants in India prefer a STEM-certified management programme over one that isn’t STEM-certified.

“Candidate demand for AI grew 38% year-over-year, with two-fifths now saying it is essential to their curricula. Global interest in STEM-certified GME programs grew 39% in five years – and to new heights in Asia, driven by demand in India and Greater China,” the report read.

In particular, the report stated that India has seen a 14-percentage-point increase in STEM-adjacent management programmes, jumping from 43% in 2019 to 57% in 2023.

Along similar lines, a study done by UKG found that 72% of the surveyed Indian employees used AI-backed tools in their workplaces, asserting that it helped increase productivity.

“About 73% employees in India report that their organisations employ AI in the workplace, but only 47% employees understand how it’s used fully and 44% employees use it partially,” the report stated.

With the use of AI only growing, this number is likely to increase across sectors in the coming years. But is there enough training to go around?

Are institutions addressing the demand for AI skilling?

The recently released QS World University Rankings has shown an uptick in the quality of AI education in the country. In 2023, only about 20 universities factored into the QS data science and AI rankings worldwide – not one Indian institute made it.

Now, the QS rankings feature as many as 72 universities offering some of the best data science and AI courses. Of these, four Indian institutions – IIT Bombay (30), IIT Kanpur (36), IIT Kharagpur (44), and IISc (45) – made the top 50. IIT Guwahati also figured among the 72.

While these are India’s premier institutions, their showing could signal a broader trend of higher education institutions embracing data science and AI as a vital part of their curricula.

Meanwhile, several management institutes, including the IIMs, have started introducing courses with electives or minors in artificial intelligence. These include modules ranging from introductory coding sessions to analytical applications in the workplace.

However, education on AI and data science is still lacking in the field of humanities, with most students having to take either bridge courses offered by tech institutions, or shift their learning trajectories in order to educate themselves on AI.

Similarly, the government has flagged the need to upskill workers in AI, as India’s contribution to the global workforce is expected to grow exponentially in the coming decades.

“For new entrants in the job market, skilling is critical and for ones who are already working, reskilling and upskilling is critical. We are in a disruptive era. Whoever is able to see it and tries to understand this will remain relevant,” Union education and skill development minister Dharmendra Pradhan said.

He said this at the launch of an AI for All curriculum initiative for ITIs, wherein students in ITIs across India would get training on AI, and bridge the knowledge gap between education and employment.

This is just one of many similar government initiatives. Several have emerged over the past year, likely spurred by the ‘Making AI In India’ initiative as well as the greenlighting of over Rs 10,000 crore for the IndiaAI Mission in March.

However, India still lacks a centralised AI skill development programme. While AI education is not yet where it needs to be, the tide is slowly turning on how AI and data science are perceived in both the Indian workplace and higher education.

Finally, an open-minded approach to AI

With trends pointing towards a major shift in AI education among students, institutions and governments, a more comprehensive curriculum could be coming soon.

The gap in AI and data science training in the Indian higher education sector is finally being addressed as reports suggest that both students and institutions are taking the shift seriously.

Moreover, this shift is likely to be encouraged, especially in higher education institutions (HEIs), as the UGC instructed that courses be remapped to account for the usage of AI across all disciplines, as per the National Programme on Artificial Intelligence (NPAI) Skilling Framework report.

“It is important for students from varied academic streams to be skilled in AI as mass applicability of any technology allows more people to creatively use it for problem-solving,” said Manish Ratnakar Joshi, secretary, UGC.

Likewise, institutions are coming out with bridging courses for teachers and other professionals to familiarise themselves with AI. Institutions like the International Institute of Information Technology (IIIT) have begun offering one-year certificate courses for professionals in data science and AI, as well as machine learning and deep learning.

The usage of AI within educational systems is not uncommon either. The Indian government itself has used AI systems for UPSC examination training and for improving learning outcomes in assessments.

Following news that students could use ChatGPT to sidestep coursework, academia was in a panic over how drastically AI could change the higher education sphere for the worse.

However, with time, it seems that the threat has died down, paving the way for a more open-minded approach on understanding the uses of AI, especially within classrooms and modern workplaces.

OpenAI Enters Japan, Releases GPT-4 Custom Model for Japanese 

OpenAI has announced its entry into the Asian market by opening its first office in Tokyo, Japan. The company is also unveiling a GPT-4 custom model optimised for the Japanese language, which it plans to release more broadly in the API in the coming months.

“We’re excited to be in Japan, which has a rich history of people and technology coming together to do more,” said Sam Altman, CEO of OpenAI. “We believe AI will accelerate work by empowering people to be more creative and productive, while also delivering broad value to current and new industries that have yet to be imagined.”

To lead its initiatives in Japan, OpenAI has appointed Tadao Nagasaki as the President of OpenAI Japan. Mr. Nagasaki will oversee commercial and market engagement efforts, as well as build a local team focused on Global Affairs, Go-to-Market strategies, Communications, Operations, and other functions tailored to the Japanese market.

Moreover, OpenAI is providing early access to the GPT-4 custom model optimised for Japanese. The model offers improved performance in translating and summarising Japanese text, operates up to three times faster than its predecessor, and is cost-effective for local businesses.

OpenAI’s presence in Japan brings it closer to leading businesses such as Daikin, Rakuten, and TOYOTA Connected, which are using ChatGPT Enterprise for applications including automating complex processes, analysing data, and optimising internal reporting.

Furthermore, OpenAI’s partnership with local governments, such as Yokosuka City, showcases the potential of AI in improving public services and increasing productivity. Yokosuka City has reported significant productivity gains among its employees since integrating ChatGPT into its operations.

Generative AI is coming for healthcare, and not everyone’s thrilled

Some experts don't think the tech is ready for prime time

Kyle Wiggers

Generative AI, which can create and analyze images, text, audio, videos and more, is increasingly making its way into healthcare, pushed by both Big Tech firms and startups alike.

Google Cloud, Google’s cloud services and products division, is collaborating with Highmark Health, a Pittsburgh-based nonprofit healthcare company, on generative AI tools designed to personalize the patient intake experience. Amazon’s AWS division says it’s working with unnamed customers on a way to use generative AI to analyze medical databases for “social determinants of health.” And Microsoft Azure is helping to build a generative AI system for Providence, the not-for-profit healthcare network, to automatically triage messages to care providers sent from patients.

Prominent generative AI startups in healthcare include Ambience Healthcare, which is developing a generative AI app for clinicians; Nabla, an ambient AI assistant for practitioners; and Abridge, which creates analytics tools for medical documentation.

The broad enthusiasm is reflected in investment: generative AI startups targeting healthcare have collectively raised tens of millions of dollars in venture capital to date, and the vast majority of health investors say the technology has significantly influenced their investment strategies.

But both professionals and patients are mixed as to whether healthcare-focused generative AI is ready for prime time.

Generative AI might not be what people want

In a recent Deloitte survey, only about half (53%) of U.S. consumers said that they thought generative AI could improve healthcare — for example, by making it more accessible or shortening appointment wait times. Fewer than half said they expected generative AI to make medical care more affordable.

Andrew Borkowski, chief AI officer at the VA Sunshine Healthcare Network, the U.S. Department of Veterans Affairs’ largest health system, doesn’t think that the cynicism is unwarranted. Borkowski warned that generative AI’s deployment could be premature due to its “significant” limitations — and the concerns around its efficacy.

“One of the key issues with generative AI is its inability to handle complex medical queries or emergencies,” he told TechCrunch. “Its finite knowledge base — that is, the absence of up-to-date clinical information — and lack of human expertise make it unsuitable for providing comprehensive medical advice or treatment recommendations.”

Several studies suggest there’s credence to those points.

In a paper in the journal JAMA Pediatrics, OpenAI’s generative AI chatbot, ChatGPT, which some healthcare organizations have piloted for limited use cases, was found to make errors diagnosing pediatric diseases 83% of the time. And in testing OpenAI’s GPT-4 as a diagnostic assistant, physicians at Beth Israel Deaconess Medical Center in Boston observed that the model ranked the wrong diagnosis as its top answer nearly two times out of three.

Today’s generative AI also struggles with medical administrative tasks that are part and parcel of clinicians’ daily workflows. On the MedAlign benchmark to evaluate how well generative AI can perform things like summarizing patient health records and searching across notes, GPT-4 failed in 35% of cases.

OpenAI and many other generative AI vendors warn against relying on their models for medical advice. But Borkowski and others say they could do more. “Relying solely on generative AI for healthcare could lead to misdiagnoses, inappropriate treatments or even life-threatening situations,” Borkowski said.

Jan Egger, who leads AI-guided therapies at the University of Duisburg-Essen’s Institute for AI in Medicine, which studies the applications of emerging technology for patient care, shares Borkowski’s concerns. He believes that the only safe way to use generative AI in healthcare currently is under the close, watchful eye of a physician.

“The results can be completely wrong, and it’s getting harder and harder to maintain awareness of this,” Egger said. “Sure, generative AI can be used, for example, for pre-writing discharge letters. But physicians have a responsibility to check it and make the final call.”

Generative AI can perpetuate stereotypes

One particularly harmful way generative AI in healthcare can get things wrong is by perpetuating stereotypes.

In a 2023 study out of Stanford Medicine, a team of researchers tested ChatGPT and other generative AI–powered chatbots on questions about kidney function, lung capacity and skin thickness. Not only were ChatGPT’s answers frequently wrong, the co-authors found, but the answers also reinforced long-held, untrue beliefs that there are biological differences between Black and white people – untruths that are known to have led medical providers to misdiagnose health problems.

The irony is, the patients most likely to be discriminated against by generative AI for healthcare are also those most likely to use it.

People who lack healthcare coverage — people of color, by and large, according to a KFF study — are more willing to try generative AI for things like finding a doctor or mental health support, the Deloitte survey showed. If the AI’s recommendations are marred by bias, it could exacerbate inequalities in treatment.

However, some experts argue that generative AI is improving in this regard.

In a Microsoft study published in late 2023, researchers said they achieved 90.2% accuracy on four challenging medical benchmarks using GPT-4. Vanilla GPT-4 couldn’t reach this score. But, the researchers say, through prompt engineering — designing prompts for GPT-4 to produce certain outputs — they were able to boost the model’s score by up to 16.2 percentage points. (Microsoft, it’s worth noting, is a major investor in OpenAI.)
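The study’s full recipe isn’t reproduced in the article, but two common prompt-engineering ingredients of this kind, few-shot worked examples and majority voting over repeated samples, can be sketched in a few lines. Everything below, including the template and the medical snippets, is invented for illustration, and no model API is called:

```python
from collections import Counter

# Illustration only: the study's actual pipeline is more involved.
# Two generic prompt-engineering tricks: few-shot worked examples,
# and self-consistency voting over several sampled answers.
FEW_SHOT = """Q: A patient has polyuria, polydipsia and high fasting glucose. Diagnosis?
Reasoning: these are classic signs of sustained hyperglycaemia.
A: diabetes mellitus

Q: {question}
Reasoning:"""

def build_prompt(question: str) -> str:
    # Prepending a worked example nudges the model to reason step by
    # step before committing to an answer, instead of answering directly.
    return FEW_SHOT.format(question=question)

def majority_vote(sampled_answers):
    # Sample the model several times at non-zero temperature and keep
    # the most common final answer ("self-consistency").
    return Counter(sampled_answers).most_common(1)[0][0]

# Stand-ins for real model samples; no API is called in this sketch.
samples = ["iron-deficiency anaemia", "thalassaemia", "iron-deficiency anaemia"]
best = majority_vote(samples)
```

Stacking techniques along these lines, rather than fine-tuning, is what the researchers credit for lifting vanilla GPT-4’s benchmark scores.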

Beyond chatbots

But asking a chatbot a question isn’t the only thing generative AI is good for. Some researchers say that medical imaging could benefit greatly from the power of generative AI.

In July, a group of scientists unveiled a system called complementarity-driven deferral to clinical workflow (CoDoC), in a study published in Nature. The system is designed to figure out when medical imaging specialists should rely on AI for diagnoses versus traditional techniques. CoDoC did better than specialists while reducing clinical workflows by 66%, according to the co-authors.

In November, a Chinese research team demoed Panda, an AI model used to detect potential pancreatic lesions in X-rays. A study showed Panda to be highly accurate in classifying these lesions, which are often detected too late for surgical intervention.

Indeed, Arun Thirunavukarasu, a clinical research fellow at the University of Oxford, said there’s “nothing unique” about generative AI precluding its deployment in healthcare settings.

“More mundane applications of generative AI technology are feasible in the short- and mid-term, and include text correction, automatic documentation of notes and letters and improved search features to optimize electronic patient records,” he said. “There’s no reason why generative AI technology — if effective — couldn’t be deployed in these sorts of roles immediately.”

“Rigorous science”

But while generative AI shows promise in specific, narrow areas of medicine, experts like Borkowski point to the technical and compliance roadblocks that must be overcome before generative AI can be useful — and trusted — as an all-around assistive healthcare tool.

“Significant privacy and security concerns surround using generative AI in healthcare,” Borkowski said. “The sensitive nature of medical data and the potential for misuse or unauthorized access pose severe risks to patient confidentiality and trust in the healthcare system. Furthermore, the regulatory and legal landscape surrounding the use of generative AI in healthcare is still evolving, with questions regarding liability, data protection and the practice of medicine by non-human entities still needing to be solved.”

Even Thirunavukarasu, bullish as he is about generative AI in healthcare, says that there needs to be “rigorous science” behind tools that are patient-facing.

“Particularly without direct clinician oversight, there should be pragmatic randomized control trials demonstrating clinical benefit to justify deployment of patient-facing generative AI,” he said. “Proper governance going forward is essential to capture any unanticipated harms following deployment at scale.”

Recently, the World Health Organization released guidelines that advocate for this type of science and human oversight of generative AI in healthcare as well as the introduction of auditing, transparency and impact assessments on this AI by independent third parties. The goal, the WHO spells out in its guidelines, would be to encourage participation from a diverse cohort of people in the development of generative AI for healthcare and an opportunity to voice concerns and provide input throughout the process.

“Until the concerns are adequately addressed and appropriate safeguards are put in place,” Borkowski said, “the widespread implementation of medical generative AI may be … potentially harmful to patients and the healthcare industry as a whole.”

How Neural Concept’s aerodynamic AI is shaping Formula 1

Tim Stevens

It’s a long way from pedal bikes to Formula 1. But that’s precisely the quantum leap that AI-based startup Neural Concept and its co-founder and CEO, Pierre Baqué, made in just six years.

In 2018, the company’s fledgling software helped develop the world’s most aerodynamic bicycle. Today, four out of 10 Formula 1 teams use an evolution of that same technology.

Along the way, Baqué’s company picked up contracts with aerospace suppliers like Airbus and Safran, earning a $9.1 million Series A raise in 2022. Now at 50 employees, Switzerland-based Neural Concept is working toward a Series B round while its software helps historic F1 teams like Williams Racing find their way back to the top of the world’s premier form of motorsport.

However, where Formula 1 cars rely on 1,000-horsepower hybrid V6 engines, Baqué’s first practical application of the technology was human-powered.

Pedal power

In 2018, Baqué was studying at the École Polytechnique Fédérale de Lausanne’s Computer Vision Laboratory, working on applying machine learning techniques to three-dimensional problems.

“I was put in contact with this guy who was leading this team, designing the sixth or seventh generation of bike, and their goal was to break a world record of bicycle speed,” Baqué said. That guy was Guillaume DeFrance, and the team was IUT Annecy from the Université Savoie Mont Blanc. The cycling team had already gone through a half-dozen iterations of bike designs.

“Two days later, I came back to him with a shape that was almost looking like the current world record holder,” Baqué said. Impressed, the team asked for more iterations. The result was, per Baqué, “the most aerodynamic bike in the world at the moment.”

That’s a strong statement, but it’s backed up by multiple world records earned in 2019. We’re not talking about aerofoil-shaped downtubes or dimpled rims to reduce drag. This bike is fully shrouded, with the cyclist sweating away in a composite cocoon, completely sheltered from the wind.

The core technology is a product called Neural Concept Shape, or NCS. It’s a machine-learning-based system that makes aerodynamic suggestions and recommendations. It fits into the broad field of computational fluid dynamics (CFD), where highly trained engineers use advanced software suites to run three-dimensional aerodynamic simulations.

CFD is much faster than carving physical models and throwing them into wind tunnels, but it is still hugely compute-intensive and largely reliant on human beings making good decisions.

At its core, NCS helps engineers avoid potential aerodynamic pitfalls while pushing them into directions they might not have considered. In “co-pilot mode,” an engineer can upload an existing 3D shape, providing a starting point, for example.

NCS will then dig into its neural network to suggest improvements or modifications, possible paths in a 3D game of choose-your-own-adventure. The human engineer then picks the most promising suggestions and runs them through further testing and refinement, iterating their way to aerodynamic glory.
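Neural Concept hasn’t published NCS’s internals, but the surrogate idea underneath, a cheap learned model standing in for an expensive solver, can be sketched with a toy regressor. The shape parameters, the analytic “drag” function, and the random-feature model below are all invented for illustration:

```python
import numpy as np

# Toy illustration only: Neural Concept's actual models are proprietary.
# The idea: learn a fast surrogate mapping shape parameters to an
# aerodynamic quantity, so candidate shapes can be scored without a
# full CFD solve each time.
rng = np.random.default_rng(0)

def fake_cfd_drag(shape):
    # Stand-in for an hour-long CFD run: an analytic function of three
    # made-up shape parameters (frontal area, taper, roughness).
    area, taper, rough = shape
    return 0.5 * area - 0.3 * np.sin(taper) + 0.1 * rough ** 2

# Offline: run the "expensive solver" on a few hundred sampled shapes.
X = rng.uniform(0.0, 1.0, size=(300, 3))
y = np.array([fake_cfd_drag(s) for s in X])

# Fit a random-feature ridge regressor as the surrogate.
W = rng.normal(size=(3, 64))
b = rng.uniform(0.0, 2.0 * np.pi, size=64)
def features(Z):
    return np.cos(Z @ W + b)

Phi = features(X)
coef = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(64), Phi.T @ y)

# Online: scoring a new candidate shape is now a matrix multiply,
# taking microseconds rather than an hour per design iteration.
candidate = np.array([[0.4, 0.7, 0.2]])
pred = features(candidate) @ coef
err = abs(pred[0] - fake_cfd_drag(candidate[0]))
```

The trade-off is the one the article circles back to: predictions come back almost instantly, but they are learned guesses rather than physics, so promising candidates still go through real CFD for verification.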

Not just “cheating the wind”

NCS is useful not just for racing but also in the automotive and aerospace industries. “The path to wide adoption in these kinds of companies is slow,” Baqué said of working within the somewhat conservative aerospace industry. “That’s how we started working more with the automotive industry, where the needs are a bit more burning, and they’ll be quick to change.”

Neural Concept secured contracts with several global suppliers, including Bosch and Mahle. Aerodynamics is increasingly key in the automotive world, with manufacturers searching for ever-more aerodynamic cars that deliver the greatest possible range from a given-sized battery pack.

But it’s not all about cheating the wind. NCS is also used in developing things like battery-cooling plates that, if made more efficient, can keep the battery at its optimal temperature without sapping too much energy in the process. “There are massive gains that can be made,” Baqué said, meaning yet more range.

While the ultimate proving ground for these technologies is always the road, the ultimate laboratory is Formula 1. A global motorsports phenomenon since 1950, F1 is currently experiencing an unprecedented wave of popularity.

The power of Netflix

The Netflix series “Formula 1: Drive to Survive” has brought the excitement of F1 to a whole new audience. While that series focuses on inter-team politics and drama, success on the track has much more to do with aerodynamics. That’s where Neural Concept comes in.

Baqué started watching Formula 1 before Netflix was even a twinkle in Reed Hastings’ eye. “I always watched, since the time of David Coulthard and Michael Schumacher.”

Today, parts developed with assistance from his company’s software are running in this pinnacle of global motorsport. “It’s a great, great sense of accomplishment,” Baqué said. “When I started the company, I was seeing this as a landmark. Not only Formula 1, but just to have parts that were designed with the software on the road. And, yeah, every time that this happens, it’s a great, great feeling.”

Formula 1 is also an extremely secretive sport. Of the four teams that Neural Concept works with, only one was willing to be identified as a client, and even it was pretty tight-lipped about the whole process.

Williams Racing is one of the most storied teams in Formula 1. Founded back in 1977 by racing legend Frank Williams, his team was so dominant in the 1990s that it won five constructors’ world championships, including three in a row from 1992 to 1994.

But like in most sports, success is cyclical for Formula 1 teams, and right now, Williams is very much in a rebuilding phase. The team finished dead last in the 2022 season, rising only to seventh last year.

NCS is one of the tools helping Williams regain its competitive edge. “We use this technology in various ways, some of which improve our simulation, and other methods that we are working on will help deliver better results first-time in CFD,” said Williams Head of Aerodynamic Technology Hari Roberts.

Again, CFD simulations are time-intensive and costly, a situation compounded by Formula 1 regulations that limit a team’s ability to test. Physical time in the wind tunnel is heavily restricted, while each team also has a limited budget for computing time they can use to develop their cars.

Any tool that can help a team get its aerodynamic designs in shape quickly is a potential advantage, and NCS is very quick indeed. Baqué estimated that a full CFD simulation that typically takes an hour would take as little as 20 seconds through NCS.

And, since NCS isn’t running actual physics-based calculations but making AI-driven guesses based on its network of aerodynamic learnings, it’s largely exempt from F1’s draconian restrictions. “Anything we can do that allows us to extract more knowledge and therefore more performance from each CFD and wind tunnel run gives us a competitive advantage,” Roberts said.

But the teams still have to pay for it. Baqué said that NCS costs vary depending on the size of the team and type of access, but typically, it’s in the range of €100,000 to €1 million per year. Considering F1 teams also operate under a $135 million annual cost cap, that’s a substantial commitment.

Williams’ Roberts wasn’t willing to point to any specific parts or lap time improvements thanks to NCS software but said it has affected their car’s performance: “This technology is used as part of our toolset for developing the car aerodynamically. We, therefore, can’t attribute lap time directly to it, but we know that it helps our correlation and the speed at which we can investigate new aerodynamic conditions.”

Beyond aerodynamics

The ceaseless march of AI won’t stop there. There is talk of artificial agents on the pit wall calling the shots for race strategy and even car setups.

“It’s a fascinating time as the growth in the AI/ML industry is exponential,” Roberts said. “However, it’s also a real challenge that faces anyone involved in technology today. Which new tools do we devote time to exploring, developing, and adopting?”

That’s not the kind of intrigue that will captivate your average “Drive to Survive” viewer, but for many F1 fans, the race behind the race is the ultimate source of drama.

As for Neural Concept, the company is continuing to push deeper into the non-motorsport side of the automotive industry, working to develop more efficient electric motors, optimize cabin heating and cooling, and even move into crash testing.

Baqué said that the company’s software can help engineers optimize a car’s crashworthiness while stripping away unnecessary weight. But, for now, the company can only do crash simulations on individual components, not whole cars. “That is one of the few applications where we have been hitting the limits of performance,” he said.

Perhaps another application for the EU’s burgeoning AI supercomputing platforms?

Top 10 LMS Platforms for Enterprise AI Training and Development

For enterprises aiming to foster technological advancement within their teams, choosing the right Learning Management System (LMS) is crucial. These platforms offer specialized training in artificial intelligence (AI), providing the tools necessary for businesses to stay competitive in a rapidly evolving digital landscape.

Here’s a list of the top 10 LMS platforms that are perfect for enterprises looking to enhance their AI capabilities (in no specific order).

MachineHack for Business

MachineHack for Business excels in transforming teams into AI powerhouses with its bespoke Learning Management System. It provides AI skill assessments, hackathons, and targeted training programs, making it an invaluable resource for practical and competitive AI learning.

Explore MachineHack LMS for Business

ADaSci AI Academy

ADaSci AI Academy focuses on Generative AI and MLOps, offering hands-on training to bridge the gap between AI-savvy professionals and the rest. Their courses like “Generative AI Application with Google Vertex AI” and “Mastering Prompt Engineering for LLMs” ensure that teams not only learn AI but become masters in applying it effectively.

Start learning at ADaSci AI Academy

Coursera for Business

Coursera for Business offers a wide range of AI and machine learning courses developed in collaboration with leading universities and companies. Its enterprise plan includes comprehensive learning programs, employee progress tracking, and globally recognized certifications.

Discover Coursera for Business

Udacity

Udacity is renowned for its project-based Nanodegree programs in AI and machine learning, tailored for enterprises needing to close skills gaps with hands-on learning experiences that are reviewed by industry experts.

Check out Udacity for Enterprise

Pluralsight

With a focus on technical training, Pluralsight provides AI and machine learning courses alongside skill assessments and role-based learning paths that align team learning outcomes with business objectives.

Visit Pluralsight

EdX for Business

EdX for Business offers rigorous, university-level courses in AI from institutions like MIT and Harvard, tailored for companies that prioritize deep, academic learning.

Learn more about EdX for Business

DataCamp for Business

Specializing in data science and analytics, DataCamp offers an interactive learning experience in AI and machine learning, perfect for hands-on skill development through coding exercises and real-world datasets.

Explore DataCamp for Business

LinkedIn Learning

LinkedIn Learning features an extensive course library, including a wealth of AI and machine learning topics, making it suitable for professionals looking to upskill quickly within a corporate environment.

Visit LinkedIn Learning

Simplilearn

Simplilearn focuses on applied learning with professional certification training in AI and machine learning, delivered through live, instructor-led sessions and labs.

Check out Simplilearn

A Cloud Guru

Ideal for teams utilizing cloud platforms for AI applications, A Cloud Guru specializes in cloud computing, offering hands-on labs and real-world scenarios that enhance learning in cloud-based AI technologies.

Visit A Cloud Guru

These LMS platforms are designed to cater to the varied needs of large organizations, ensuring that teams not only learn AI technologies but are also able to effectively implement them in real-world applications.

Comparison

| LMS Platform | Specialization | Notable Features | Target Audience |
|---|---|---|---|
| MachineHack for Business | AI Talent Development | AI skill assessments, hackathons, bespoke LMS | Enterprises focusing on AI |
| ADaSci AI Academy | Generative AI, MLOps | Hands-on courses on Generative AI, MLOps, Prompt Engineering | AI professionals, data scientists |
| Coursera for Business | Broad educational offerings | Courses from top universities, tracking, certifications | Enterprises seeking academic partnerships |
| Udacity | Tech and AI skills | Project-based Nanodegree programs, industry expert reviews | Enterprises closing skills gaps |
| Pluralsight | Tech skills | Technical courses, skill assessments, learning paths | Tech-focused enterprises |
| EdX for Business | Academic rigor | University-level courses from MIT, Harvard, etc. | Enterprises valuing deep learning |
| DataCamp for Business | Data science and AI | Interactive courses, coding exercises, real-world datasets | Teams needing hands-on data science training |
| LinkedIn Learning | Broad professional development | Extensive course library, quick upskilling | Professionals in various industries |
| Simplilearn | Certification training | Live sessions, professional certification, hands-on labs | Enterprises looking for certified training |
| A Cloud Guru | Cloud computing | Cloud-based AI training, hands-on labs, real-world scenarios | Teams using cloud for AI applications |

MachineHack for Business and ADaSci AI Academy distinguish themselves in the crowded LMS market through their singular focus on artificial intelligence. MachineHack for Business leverages its expertise by offering practical AI skill assessments and hackathons, making it highly relevant for enterprises seeking to directly enhance their teams’ AI capabilities.

Similarly, ADaSci AI Academy’s focus on Generative AI and MLOps through hands-on training ensures that learners not only understand AI concepts but are fully prepared to implement these advanced technologies in real-world scenarios. This focused approach ensures that both platforms deliver highly specialized and effective training, making them the top choices for organizations committed to leading in AI innovation.

The post Top 10 LMS Platforms for Enterprise AI Training and Development appeared first on Analytics India Magazine.

Vana plans to let users rent out their Reddit data to train AI

A startup, Vana, says it wants users to get paid for training data

Kyle Wiggers

In the generative AI boom, data is the new oil. So why shouldn’t you be able to sell your own?

From big tech firms to startups, AI makers are licensing e-books, images, videos, audio and more from data brokers, all in the pursuit of training up more capable (and more legally defensible) AI-powered products. Shutterstock has deals with Meta, Google, Amazon and Apple to supply millions of images for model training, while OpenAI has signed agreements with several news organizations to train its models on news archives.

In many cases, the individual creators and owners of that data haven’t seen a dime of the cash changing hands. A startup called Vana wants to change that.

Anna Kazlauskas and Art Abal, who met in a class at the MIT Media Lab focused on building tech for emerging markets, co-founded Vana in 2021. Prior to Vana, Kazlauskas studied computer science and economics at MIT, eventually leaving to launch a fintech automation startup, Iambiq, out of Y Combinator. Abal, a corporate lawyer by training and education, was an associate at The Cadmus Group, a Boston-based consulting firm, before heading up impact sourcing at data annotation company Appen.

With Vana, Kazlauskas and Abal set out to build a platform that lets users “pool” their data — including chats, speech recordings and photos — into data sets that can then be used for generative AI model training. They also want to create more personalized experiences — for instance, daily motivational voicemail based on your wellness goals, or an art-generating app that understands your style preferences — by fine-tuning public models on that data.

“Vana’s infrastructure in effect creates a user-owned data treasury,” Kazlauskas told TechCrunch. “It does this by allowing users to aggregate their personal data in a non-custodial way … Vana allows users to own AI models and use their data across AI applications.”

Here’s how Vana pitches its platform and API to developers:

The Vana API connects a user’s cross-platform personal data … to allow you to personalize your application. Your app gains instant access to a user’s personalized AI model or underlying data, simplifying onboarding and eliminating compute cost concerns … We think users should be able to bring their personal data from walled gardens, like Instagram, Facebook and Google, to your application, so you can create amazing personalized experience from the very first time a user interacts with your consumer AI application.

Creating an account with Vana is fairly simple. After confirming your email, you can attach data to a digital avatar (like selfies, a description of yourself and voice recordings) and explore apps built using Vana’s platform and data sets. The app selection ranges from ChatGPT-style chatbots and interactive storybooks to a Hinge profile generator.

Vana Reddit DAO

Image Credits: Vana

Now why, you might ask — in this age of increased data privacy awareness and ransomware attacks — would someone ever volunteer their personal info to an anonymous startup, much less a venture-backed one? (Vana has raised $20 million to date from Paradigm, Polychain Capital and other backers.) Can any profit-driven company really be trusted not to abuse or mishandle any monetizable data it gets its hands on?

In response to that question, Kazlauskas stressed that the whole point of Vana is for users to “reclaim control over their data,” noting that Vana users have the option to self-host their data rather than store it on Vana’s servers and control how their data’s shared with apps and developers. She also argued that, because Vana makes money by charging users a monthly subscription (starting at $3.99) and levying a “data transaction” fee on devs (e.g. for transferring data sets for AI model training), the company is disincentivized to exploit users and the troves of personal data they bring with them.

“We want to create models owned and governed by users who all contribute their data,” Kazlauskas said, “and allow users to bring their data and models with them to any application.”

Now, while Vana isn’t selling users’ data to companies for generative AI model training (or so it claims), it wants to allow users to do this themselves if they choose — starting with their Reddit posts.

This month, Vana launched what it’s calling the Reddit Data DAO (Decentralized Autonomous Organization), a program that pools multiple users’ Reddit data (including their karma and post history) and lets them decide together how that combined data is used. After joining with a Reddit account, submitting a request to Reddit for their data and uploading that data to the DAO, users gain the right to vote alongside other members of the DAO on decisions like licensing the combined data to generative AI companies for a shared profit.

We have crunched the numbers and r/datadao is now largest data DAO in history: Phase 1 welcomed 141,000 reddit users with 21,000 full data uploads.

— r/datadao (@rdatadao) April 11, 2024

It’s an answer of sorts to Reddit’s recent moves to commercialize data on its platform.

Reddit previously didn’t gate access to posts and communities for generative AI training purposes. But it reversed course late last year, ahead of its IPO. Since the policy change, Reddit has raked in over $203 million in licensing fees from companies including Google.

“The broad idea [with the DAO is] to free user data from the major platforms that seek to hoard and monetize it,” Kazlauskas said. “This is a first and is part of our push to help people pool their data into user-owned data sets for training AI models.”

Unsurprisingly, Reddit — which isn’t working with Vana in any official capacity — isn’t pleased about the DAO.

Reddit banned Vana’s subreddit dedicated to discussion about the DAO. And a Reddit spokesperson accused Vana of “exploiting” its data export system, which is designed to comply with data privacy regulations like the GDPR and California Consumer Privacy Act.

“Our data arrangements allow us to put guardrails on such entities, even on public information,” the spokesperson told TechCrunch. “Reddit does not share non-public, personal data with commercial enterprises, and when Redditors request an export of their data from us, they receive non-public personal data back from us in accordance with applicable laws. Direct partnerships between Reddit and vetted organizations, with clear terms and accountability, matters, and these partnerships and agreements prevent misuse and abuse of people’s data.”

But does Reddit have any real reason to be concerned?

Kazlauskas envisions the DAO growing to the point where it impacts the amount Reddit can charge customers for its data. That’s a long ways off, assuming it ever happens; the DAO has just over 141,000 members, a tiny fraction of Reddit’s 73-million-strong user base. And some of those members could be bots or duplicate accounts.

Then there’s the matter of how to fairly distribute payments that the DAO might receive from data buyers.

Currently, the DAO awards “tokens” — cryptocurrency — to users corresponding to their Reddit karma. But karma might not be the best measure of quality contributions to the data set — particularly in smaller Reddit communities with fewer opportunities to earn it.
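
The karma-proportional payout described above amounts to a simple pro-rata allocation. A hypothetical sketch of that model (the member names, karma figures and payment pool are all invented):

```python
# Split a data-licensing payment among DAO members in proportion to
# each member's Reddit karma. Purely illustrative; not Vana's actual
# token mechanics.

def split_payment(pool, karma_by_user):
    """Allocate `pool` proportionally to each member's karma."""
    total = sum(karma_by_user.values())
    return {user: pool * k / total for user, k in karma_by_user.items()}

members = {"alice": 12000, "bob": 3000, "carol": 500}
payouts = split_payment(15500.0, members)
print(payouts)  # alice receives ~77% of the pool
```

As the article notes, karma is a crude proxy for contribution quality, so any real scheme would likely need a fairer weighting than this straight proportion.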

Kazlauskas floats the idea that members of the DAO could choose to share their cross-platform and demographic data, making the DAO potentially more valuable and incentivizing sign-ups. But that would also require users to place even more trust in Vana to treat their sensitive data responsibly.

Personally, I don’t see Vana’s DAO reaching critical mass. The roadblocks standing in the way are far too many. I do think, however, that it won’t be the last grassroots attempt to assert control over the data increasingly being used to train generative AI models.

Startups like Spawning are working on ways to allow creators to impose rules guiding how their data is used for training while vendors like Getty Images, Shutterstock and Adobe continue to experiment with compensation schemes. But no one’s cracked the code yet. Can it even be cracked? Given the cutthroat nature of the generative AI industry, it’s certainly a tall order. But perhaps someone will find a way — or policymakers will force one.

The Current State of AI in Marketing 2024

The use of AI in marketing has changed how businesses communicate with customers: it enables personalized client experiences and automates repetitive tasks. According to a McKinsey study, around 75% of the value that AI use cases could deliver falls across four areas, and marketing is one of them.

The AI-in-marketing market is expected to reach $145.42 billion in size by 2032.

Despite AI's potential to deliver substantial results in marketing, marketers are hesitant to fully adopt this technology. Hence, if you're a marketer not using AI, you're potentially missing out on the benefits of a highly transformative technology.

Let’s go over the current state of AI adoption in marketing and how marketers can benefit from it.

AI Adoption Among Marketers

The winds of change are sweeping through marketing departments worldwide. A recent GetResponse survey revealed that 45% of respondents already use AI tools in their marketing strategies. They stated that they employed AI to automate processes, personalize marketing, and gain deeper insights into the needs of their target audience.

Another noteworthy finding from the survey is that roughly one-third of respondents (32%) either do not currently use AI (26%) or do not know what it is (6%). This underscores the need for greater awareness of AI marketing’s advantages; marketers can only fully capitalize on AI once they understand its potential and how it can boost their efforts.

AI also offers marketers an exciting opportunity. Early adopters who properly leverage AI's power can gain a competitive advantage in the marketplace.

The Potential of AI in Marketing

AI is changing the marketing industry by creating various new opportunities. Here are a few instances of how AI in marketing is altering how businesses communicate with their customers:

1. Data Analytics

AI enables marketers to monitor customer data and uncover hidden patterns and trends, supporting well-informed decisions through a deeper grasp of customers’ behavior and preferences.

2. Content Generation

AI can generate personalized content, from product descriptions to social media posts, at scale. This frees up marketers to focus on creative strategy and ensures content resonates with specific audience segments.

3. Personalization

AI analyzes individual client data and behavior to allow hyper-personalized marketing experiences. Through dynamic content recommendations, personalized email campaigns, and tailored product suggestions, AI-driven personalization fosters deeper customer engagement and loyalty, driving conversion rates and customer satisfaction.

4. Audience Segmentation & Targeting

AI algorithms can segment audiences, identify high-value customers, and enable targeted marketing campaigns. This maximizes campaign effectiveness and ensures resources are directed towards the most receptive audiences.
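
The segmentation idea above is commonly implemented with clustering. Below is a minimal, stdlib-only sketch of k-means over two invented customer features (annual spend and monthly site visits); production stacks would use richer features and a library such as scikit-learn rather than this hand-rolled version.

```python
# Toy k-means audience segmentation: group customers by similarity
# so campaigns can target each segment differently.
import random

def kmeans(points, k, iters=50, seed=42):
    """Cluster 2-D points into k groups with plain Lloyd's algorithm."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x, y in points:
            # assign each point to its nearest center
            i = min(range(k),
                    key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2)
            clusters[i].append((x, y))
        # recompute centers; keep the old one if a cluster emptied out
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# (annual_spend, visits_per_month) for ten invented customers
customers = [(100, 2), (120, 3), (90, 1), (800, 20), (950, 25),
             (870, 22), (400, 10), (430, 12), (110, 2), (820, 24)]
centers, segments = kmeans(customers, k=3)
for c, seg in zip(centers, segments):
    print(f"segment center={c} size={len(seg)}")
```

Each resulting segment (e.g. low-spend occasional visitors vs. high-spend frequent visitors) can then receive its own messaging and offers.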

5. Programmatic Advertising

AI automates the process of buying and selling ad space, optimizing bids in real time for maximum reach and ROI. This saves marketers valuable time and improves efficiency while delivering impactful ad campaigns.

6. Search Engine Optimization (SEO)

AI can analyze search trends and user behavior to inform SEO strategies. This helps marketers identify relevant keywords, optimize content for search engines, and improve their organic search ranking.

AI Concerns that Hinder AI Adoption in Marketing

While AI offers numerous benefits, legitimate concerns exist that can affect its wider adoption in marketing. Here are some key roadblocks:

  • Data Security: Marketers handle a vast amount of sensitive customer data. Concerns about AI security, data breaches, and misuse of information with AI tools can be a significant deterrent.
  • Vague AI Regulations: The legal landscape surrounding AI is still evolving. Unclear regulations create uncertainty and hesitation around data privacy and consumer rights in AI-powered marketing, a concern cited by 30% of respondents.
  • Lack of AI Strategy: Many businesses lack a clear roadmap for AI implementation. The technology's potential can remain unrealized without a well-defined strategy aligning AI with overall marketing goals.
  • Implementation Cost/Pricey Technology: Advanced AI tools can come with a hefty price tag, posing a challenge for businesses with limited budgets. As GetResponse shows, 35% of respondents are concerned about AI costs. Additionally, the cost of implementation and integration with existing infrastructure can be a barrier.
  • Skills Gap/Upskilling: Using AI effectively requires a new skillset within marketing teams. Moreover, upskilling current employees or recruiting individuals with AI and data analysis expertise might require additional investment.

Strategies to Overcome Challenges Related to AI Adoption

AI's power in marketing is undeniable, but overcoming the roadblocks to adoption is key. Here are some strategies for navigating these challenges:

Education and Training

Equip your marketing team with the knowledge and skills to work effectively with AI. Invest in training programs or workshops that dispel misconceptions and address concerns about AI, building the team's confidence in using it.

Collaboration with AI Experts and Consultants

Team up with AI experts for guidance. These professionals can help you integrate AI into your existing processes, and with their expertise on hand, you can create a strategic roadmap that optimizes the effectiveness of your AI plan.

Pilot Projects and Testing Phases

Start small. Implement pilot projects with specific goals to test the effectiveness of AI tools in your marketing efforts. This method lets you experiment with AI safely and highlights its benefits for your organization.

Transparency and Communication

Openly communicate the benefits and limitations of AI in marketing to all stakeholders. Addressing privacy concerns and promoting a sense of trust in AI implementation is important for its successful adoption.

Continuous Monitoring and Evaluation

Monitor AI performance regularly, analyzing results and adapting strategies as needed. This will help you understand whether you are on the right track and ensure that AI tools deliver optimal results and meet your evolving marketing goals.

Staying Informed on Evolving AI Regulations

Stay proactive and updated on emerging AI regulations, such as the EU’s AI Act, to ensure compliance and mitigate legal risks. Building a culture of responsible AI use can strengthen consumer trust and promote long-term success.

Ready to use the power of AI in your marketing strategy? Visit Unite.ai, a leading resource for AI and marketing news and insights today. Explore the latest advancements in AI technology and discover how it can transform your marketing efforts.

TCS Records $900 Million AI and GenAI Pipeline This Quarter 

India’s leading IT company, Tata Consultancy Services (TCS), is currently developing AI and generative AI projects worth $900 million, said CEO and MD K Krithivasan during the Q4 2024 earnings call. This figure nearly matches the revenue achieved by rival Accenture, which totaled $1.1 billion in the first two quarters.

“Following the launch of our AI.Cloud unit, we have observed a notable increase in market interest. To date this year, we have secured over 200 AI engagements,” said Krithivasan.

“During this quarter, we saw significant demand for Cloud, data platforms and Gen AI across industry segments. Clients are seeking to harness these technologies to reimagine customer experience, simplify their technology estate and transform their operating model,” said the company in a statement.

TCS recently announced that it has trained over 350,000 employees in AI/ML, including GenAI. Moreover, TCS is also a launch partner for the newly announced AWS Generative AI Competency.

TCS has successfully applied AI to transform various aspects of its customers’ value chains. Notably, they’ve utilised GenAI to enhance the airline customer experience during flight disruptions, enabling natural conversations and offering alternative routing options.

Furthermore, TCS has leveraged GenAI capabilities to simplify and streamline the contract review process, improving clause identification and validation, version control, and ultimately enhancing contract closure agility and risk accuracy.

In the cybersecurity domain, TCS experienced robust growth, particularly in Identity and Access Management (IAM), Governance Risk Compliance, and Network Security. The integration of AI and GenAI into security offerings has garnered significant interest and traction from clients across diverse sectors.

TCS’ AI and GenAI Security solutions have seen notable adoption, reinforcing its capabilities in providing comprehensive and integrated cybersecurity services.

The post TCS Records $900 Million AI and GenAI Pipeline This Quarter appeared first on Analytics India Magazine.

Father of Computational Theory Wins 2023 Turing Award

Renowned computer scientist Avi Wigderson has won the prestigious 2023 A.M. Turing Award from the Association for Computing Machinery (ACM) for “reshaping our understanding of the role of randomness in computation and for decades of intellectual leadership in theoretical computer science.” His contributions to the theory of computation profoundly impact areas such as randomness in computation, complexity theory, and cryptography.

The 2021 Abel Prize winner loves hard problems. “It’s good to have hard problems,” he says, finding them not only stimulating but also instrumental in driving the development of tools such as pseudorandom number generators.

Wigderson began his academic journey at the Technion–Israel Institute of Technology, majoring in computer science. He then went on to study at Princeton University, where he earned an MS in Engineering in 1981, an MA in 1982, and a Ph.D. in 1983. He focused his research on computational complexity theory. After his Ph.D., Wigderson held several visiting positions at prestigious institutions such as IBM Research and the Mathematical Sciences Research Institute.

Since 1999, Wigderson has been with the Institute for Advanced Study in Princeton as the Herbert H. Maass Professor in the School of Mathematics, contributing extensively to the fields of theoretical computer science and mathematics.

Apart from this, the Israeli-born scientist has several accolades to his name, including the Donald E. Knuth Prize (2019), the Gödel Prize (2009), the Levi L. Conant Prize (2008), and the Rolf Nevanlinna Prize (1994).

Shaping Theoretical Computer Science

Wigderson’s research is notable for its breadth and depth, addressing some of the most pressing questions in computer science. His work on randomness in computation has been particularly influential. He explored how randomness can enhance the performance of algorithms, making significant advances in the efficiency of computational methods.
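
A classic, concrete instance of randomness improving computation (not Wigderson's own construction, but squarely in the tradition his work formalized) is randomized primality testing: the Miller-Rabin test uses random witnesses to decide primality quickly, with an error probability that can be made vanishingly small by repetition.

```python
# Miller-Rabin probabilistic primality test: each random witness `a`
# has at most a 1/4 chance of missing a composite, so 20 rounds give
# an error probability below 4**-20.
import random

_rng = random.Random(0)  # fixed seed so the demo is reproducible

def is_probable_prime(n, rounds=20):
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):      # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                     # write n-1 = 2**s * d, d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = _rng.randrange(2, n - 1)    # random witness
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                # `a` certifies that n is composite
    return True

print(is_probable_prime(2 ** 61 - 1))   # a Mersenne prime
print(is_probable_prime(2 ** 61 + 1))   # composite (divisible by 3)
```

Deterministic alternatives exist (notably the AKS algorithm), but the randomized test remains far faster in practice, which is exactly the kind of power-of-randomness question this body of work studies.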

Another major area of his research is complexity theory. Wigderson has contributed extensively to our understanding of which computational problems are tractable and which are not, helping to classify problems based on the computational power needed to solve them. Understanding the complexity of algorithms also lets researchers develop more efficient methods for training machine learning models, especially on large datasets.

Wigderson has also made groundbreaking contributions to cryptographic protocols and zero-knowledge proofs. Zero-knowledge proofs are methods by which one party can prove the validity of a statement to another without sharing any additional information. He describes them as “the antithesis of mathematics”: the goal is to prove that a statement is correct while revealing nothing else. This paradoxical property has broad applications, including secure communications and blockchain technology.
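
To make the "prove without revealing" idea concrete, here is a toy Schnorr-style identification protocol, a standard textbook example rather than one of Wigderson's own constructions: the prover convinces a verifier that she knows the secret exponent x behind a public value y = g^x mod p, without ever disclosing x. The tiny fixed parameters are for illustration only; real deployments use large, carefully chosen groups.

```python
# Toy Schnorr identification: commitment t, random challenge c,
# response s. The response is masked by fresh randomness r, so it
# leaks nothing about the secret x on its own.
import random

p = 2 ** 127 - 1          # a Mersenne prime serving as toy modulus
g = 3                     # toy generator
x = 123456789             # prover's secret
y = pow(g, x, p)          # public key

_rng = random.Random(1)

def prove_round():
    r = _rng.randrange(1, p - 1)       # prover's fresh randomness
    t = pow(g, r, p)                   # commitment
    c = _rng.randrange(0, 2 ** 64)     # verifier's random challenge
    s = (r + c * x) % (p - 1)          # response, masked by r
    return t, c, s

def verify(t, c, s):
    # g^s should equal t * y^c (mod p) iff the prover knows x
    return pow(g, s, p) == (t * pow(y, c, p)) % p

assert all(verify(*prove_round()) for _ in range(5))
print("statement verified without learning x")
```

The verifier checks g^s = t * y^c, which holds exactly when s was built from the real x; yet because r is uniform, the transcript (t, c, s) could have been simulated without knowing x, which is the zero-knowledge property in miniature.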

Humans & Machines

In 2022, during an interview at the Heidelberg Laureate Forum, Wigderson offered rich insights into the evolving relationship between humanity and technology, discussing how foundational theories in computation directly shape progress in AI and machine learning (ML).

“In mathematics and other fields, we often encounter the question of how we differ from machines. I like to challenge this perspective by suggesting that we, too, are machines in a sense,” he said, describing humans as complex systems governed by the laws of physics, chemistry, and biology.

Speaking about Alan Turing’s legacy, Wigderson said, “He (Turing) was ahead of his time, pondering questions about machine consciousness and the potential for computers to mimic human behavior in various ways.” The Turing Award, named after Turing, a foundational figure in modern computing who also proposed the Turing Test, honors major contributions to the field of computing.

For Wigderson, this isn’t just a litmus test for machine intelligence but a broader philosophical inquiry into what constitutes ‘intelligence’.

Reflecting on Turing’s impact, Wigderson admired how Turing had foreseen key issues in AI, such as machine consciousness, that remain central to contemporary research.

After being awarded the Turing Award in 2023, Wigderson’s thoughts from this interview resonate even more powerfully. His discussions not only align with Turing’s initial inquiries but also highlight a continuous thread of intellectual curiosity and exploration in computational theory.

This prestigious recognition in 2023 celebrated Wigderson’s lifelong contributions to theoretical computer science, acknowledging how his work has shaped our understanding of both the capabilities and the future potential of AI and ML.

The Turing Award, often called the “Nobel Prize of computing,” has previously gone to AI pioneers such as John McCarthy (1971), who coined the term “artificial intelligence,” and Marvin Minsky (1969), both of whom laid the field’s foundations. More recently, in 2018, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun were honored for their contributions to deep learning.

The post Father of Computational Theory Wins 2023 Turing Award appeared first on Analytics India Magazine.