T-Hub Incubated MATH is Launching AI Career Finder to Create AI Jobs 


Over the years, T-Hub has grown to become the largest innovation hub in the world. Since its inception in 2015, T-Hub, led by the Telangana government, has nurtured over 1,600 startups.

Just last month, the Machine Learning and Artificial Intelligence Technology Hub (MATH) was incubated at T-Hub. MATH aims to foster AI innovation by bridging the gap between startups, corporates, academia, investors, and governments.

In the midst of concerns about job displacement caused by AI, MATH aims to generate employment opportunities in the field of AI within the country. The initial objective is to create 500 AI-related jobs by the end of 2025.

MATH CEO Rahul Paith believes AI will likely take over redundant jobs involving repetitive and routine tasks, such as data entry, administrative work, and basic customer service roles.

“However, it’s essential to recognize that the rise of AI will also create a demand for new jobs, particularly those requiring human skills such as creativity, critical thinking, problem-solving, and emotional intelligence.

“Roles that involve working alongside AI systems, such as AI trainers, data scientists, machine learning engineers, and AI ethicists, will become increasingly prevalent,” he told AIM.

Creating 500 AI jobs

One of the primary goals of MATH besides nurturing AI startups is to create AI and AI-related jobs in the country. “Our vision is to generate over 500 AI-related jobs by 2025 and support more than 150 startups annually,” he said.

To enable this, MATH aims to foster a supportive environment for AI-first startups, providing them with resources, mentorship, and access to networks crucial for growth.

“In the first year, we are aiming to onboard over a hundred startups. These startups are AI-first, deeply involved in either building around AI, utilising AI, or contributing to the AI ecosystem,” he said.

Besides undertaking initiatives like talent development programmes, industry-academia collaborations, and targeted investment in AI research and development, MATH is launching its own job portal.

Called AI Career Finder, the platform is dedicated to nurturing and empowering the next generation of AI/ML talent. Paith said the platform is designed to serve as a central hub for connecting startups seeking top AI/ML professionals with candidates searching for exciting opportunities in the field.

By leveraging AI Career Finder, MATH aims to streamline talent acquisition and placement, thereby strengthening its efforts to catalyse job creation in the AI sector.

“Additionally, MATH has identified key sectors such as healthcare and clean tech as prime areas for AI integration and growth. By facilitating collaborations and investments in these sectors, MATH aims to amplify job opportunities within AI-related fields.”

AI Programmes

MATH has also launched a few programmes designed to foster AI innovation in the startup ecosystem. “MATH Nuage is our pioneering initiative, through which we provide comprehensive support and guidance to aspiring entrepreneurs navigating the complex landscape of AI innovation,” Paith said.

Its key components include Virtual Partner Support, which connects startups with strategic partners for insights and resources.

“Similarly, Mentor Desk Support offers guidance from seasoned professionals, Funding Desk Support facilitates securing investment, and Access to Data Lake enables startups to access a vast collection of data for AI model development,” Paith explained.

Another flagship programme launched by MATH is called the AI Scaleup Programme, which aims to accelerate AI innovation and entrepreneurship.

“This initiative is geared towards supporting startups at the scale-up stage, providing them with the resources, mentorship, and networking opportunities needed to propel their growth and success in the AI market.”

A mini data centre

MATH has also set up a mini data centre with GPU capabilities to help AI startups with AI training and inferencing. “In comparison to constructing a complete data centre, the mini data centre (called MINI DC) offers powerful computing abilities at a much lower price.”

The data centre helps startups meet their high-performance computing (HPC) needs and is loaded with NVIDIA A100 GPUs. “The mini data centre’s infrastructure ensures efficient deployment of trained models, enabling startups to bring their AI solutions to market quickly,” Paith said.

Closing the funding gap

Along with T-Hub, MATH also assists startups in securing funding through various channels, including venture capital firms, angel investors, and government grants. “We provide support in preparing funding proposals, pitching to investors, and negotiating investment terms,” Paith said.

However, he believes investors, incubators, and government agencies must collaborate to close the funding gap and foster a risk-tolerant climate. “Investors must acknowledge the extended value proposition of deeptech startups and their capability to make a significant social and economic difference.

“Moreover, investing in specialised education and training programmes is essential to develop a strong deeptech talent pool.
“Through creating a joint ecosystem, we can enable Indian deeptech startups to flourish and emerge as global pioneers in innovation,” he said.

The post T-Hub Incubated MATH is Launching AI Career Finder to Create AI Jobs appeared first on Analytics India Magazine.

Meta Launches Android of AR/VR Devices 

Tech giant Meta announced that it is opening up the operating system that powers its Meta Quest devices to third-party hardware makers, aiming to expand the ecosystem for developers and offer more choices to consumers.

This new hardware ecosystem will run on Meta Horizon OS, the mixed reality operating system that powers Meta Quest headsets. This move will create a more open and diverse ecosystem for virtual reality (VR) devices, mirroring the approach Google took with its Android mobile operating system.

Meta Horizon OS combines core technologies for mixed reality experiences, focusing on features that enhance social interactions on the platform. This operating system is the result of Meta’s work on independent headsets. It incorporates inside-out tracking, self-tracked controllers, and advanced interaction systems such as hand, eye, face, and body tracking.

This operating system also integrates features for blending digital and physical worlds, such as high-resolution Passthrough, Scene Understanding, and Spatial Anchors.

Meta’s decision comes as the company seeks to expand the reach and adoption of its VR platform. By allowing other manufacturers to create devices running on Meta Horizon OS, Meta hopes to accelerate innovation and consumer choice in the VR market.

Meta will partner with external hardware companies, including Lenovo, Microsoft and Asus, to build virtual reality headsets using the company’s Horizon operating system. This strategy could potentially lead to a wider variety of VR devices at different price points and with varying features, catering to a broader range of consumers.

The company is also developing a new spatial app framework to help mobile developers build mixed reality experiences. Developers will be able to use the tools they’re already familiar with to bring their mobile apps to Meta Horizon OS or to create entirely new mixed reality apps.

By reducing barriers between the Meta Horizon Store and App Lab, Meta is enhancing its app ecosystem. This change allows developers to release software on the platform more easily.


Microsoft Introduces Phi-3, LLM That Runs on the Phone


While speaking to AIM, Harkirat Behl, one of the creators of the Phi model, said that his team was working on the next version of Phi-2 and making it more capable. “Phi-1.5 started showing great coding capabilities, Phi-2 was code with common sense abilities, and the next one would be even more capable,” he said.

Microsoft has now unveiled Phi-3-Mini, a 3.8 billion parameter language model trained on an extensive dataset of 3.3 trillion tokens. Despite its compact size, Phi-3-Mini boasts performance levels that rival larger models such as Mixtral 8x7B and GPT-3.5.

“One of the things that makes Phi-2 better than Meta’s Llama 2 7B and other models is that its 2.7 billion parameter size is very well suited for fitting on a phone,” said Behl. Phi-3 builds further on this advantage.

For instance, Phi-3-Mini achieves 69% on the MMLU benchmark and 8.38 on MT-Bench, performance comparable to far larger models, while remaining small enough for deployment on mobile phones.

Phi-3-Mini, being highly capable, can run locally on a cell phone. Its small size allows it to be quantized to 4 bits, occupying approximately 1.8GB of memory. Microsoft tested the quantized model by deploying Phi-3-Mini on an iPhone 14 with an A16 Bionic chip, running natively on the device and fully offline, achieving more than 12 tokens per second.
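The ~1.8GB figure follows almost directly from the parameter count. A quick back-of-the-envelope sketch, assuming the reported footprint covers the 4-bit quantized weights alone (ignoring activations and runtime overhead):

```python
# Approximate memory footprint of a quantized model from its parameter
# count and bit width. Assumption: weights only, no runtime overhead.

def quantized_size_gib(n_params: float, bits_per_weight: int) -> float:
    """Approximate size of quantized weights, in GiB."""
    total_bytes = n_params * bits_per_weight / 8
    return total_bytes / 2**30

# Phi-3-Mini: 3.8 billion parameters at 4 bits each
print(round(quantized_size_gib(3.8e9, 4), 2))  # 1.77, close to the reported ~1.8GB
```

The same arithmetic explains why a 7B model at 4 bits (~3.3 GiB) is a much tighter fit on a phone than the 3.8B Phi-3-Mini.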

The innovation behind Phi-3-Mini lies in its training dataset, an expanded version of the one used for its predecessor, Phi-2. This dataset comprises heavily filtered web data and synthetic data. The model has also been optimised for robustness, safety, and chat format.

Microsoft has also introduced Phi-3-Small and Phi-3-Medium models, both significantly more capable than Phi-3-Mini. Phi-3-Small, with 7 billion parameters, utilises the tiktoken tokenizer for improved multilingual tokenization. It boasts a vocabulary size of 100,352 and a default context length of 8K.

The model follows the standard decoder architecture of the 7B model class, featuring 32 layers and a hidden size of 4096. To minimise its KV cache footprint, Phi-3-Small employs grouped-query attention, with four queries sharing one key.

Additionally, it alternates layers of dense attention with a novel blocksparse attention to optimise KV cache savings while maintaining long-context retrieval performance. An additional 10% of multilingual data was used for training this model.

However, Phi-3-Mini has its limitations. While it demonstrates a similar level of language understanding and reasoning ability as much larger models, it is fundamentally limited by its size for certain tasks. For example, it lacks the capacity to store extensive “factual knowledge,” resulting in lower performance on tasks such as TriviaQA.

Microsoft believes such weaknesses can be addressed by augmenting the model with a search engine. Additionally, the model’s language capabilities are mostly restricted to English, highlighting the need to explore multilingual capabilities for Small Language Models.


How predictive analytics improves payment fraud detection


Payment fraud is a significant issue for banks, customers, government agencies and others. However, advanced predictive analytics tools can reduce or eliminate it.

Minimizing false alarms

Many people have had the embarrassing experience of trying to pay for something and having the transaction flagged. It can be understandably frustrating whether this happens at an e-commerce site or when standing in front of a cashier. However, the goal is to reduce those false alarms while stopping unauthorized transactions. Some organizations use predictive analytics to help.

In one case, Synchrony — a financial and payment solutions provider — deployed artificial intelligence and machine learning to make its fraud detection approach more accurate. Executives say it can correctly identify genuine fraud more than 90% of the time, greatly reducing the friction that false alarms cause.

The company’s system continually adapts to the changing landscape. Part of its functionality involves automated decision-making and the detection of abnormal behaviors.

Many financial institution leaders are understandably tight-lipped about the precise functionality of proprietary solutions. Still, they understand how false alarms can cause customers to change their minds about purchases and feel less eager to buy things with methods other than cash.

Identifying patterns consistent with fraud

Just as it’s important for predictive analytics tools not to flag authentic transactions, they must also recognize telltale signs of fraud. For example, a significant deviation in user behavior could indicate a potential account takeover, an event requiring prompt investigation.

Mastercard uses tools to identify and predict these signs, which the company says stopped more than $35 billion in fraud-related losses over three years. These solutions work by assigning customers a consumer fraud risk score based on their current or past activities.

Representatives also say the technology has helped them understand emerging or persistent fraud types. It’s then easier to warn customers about the newest scams, such as by sending targeted emails, text messages or brochures to educate them about the warning signs.

Some Mastercard employees are also working with other financial companies, leading coordinated fraud-related data-sharing efforts. Then, relevant parties throughout the industry can share knowledge and learn from each other.

Upholding industry benchmarks

People’s banking transaction methods have evolved. For example, 31% of adults made payments with voice-activated applications in 2022. These new methods mean financial professionals need new and diverse ways to spot payment fraud. Predictive analytics tools can indicate whether brands perform well enough in these efforts.

Visa offers a fraud-prevention solution used by thousands of financial institutions. It enables those parties to work with merchants, applying analytics to authorize payments and deal with disputes. The tool also reflects industry benchmarks, showing whether participants get results within the average for particular case types.

One entity using Visa’s tool experienced a 30% reduction in fraud, along with a 10% boost in transaction approvals. That breakdown shows how well-applied predictive analytics tools and other technologies can result in better outcomes for everyone involved.

Additionally, when a merchant knows particular metrics are significantly higher or lower than industry benchmarks, they can recognize what’s going well so far and know where room for improvement exists.

Increasing customer protections

Anyone who has experienced the aftermath of a successful payment fraud attempt knows how quickly things can go wrong. A criminal could drain someone’s bank account in minutes. However, predictive analytics tools reduce the chances of such instances, keeping customers safer.

These solutions rely on vast amounts of data, comparing what’s presently happening with someone’s account with historical information about it. Any evidence of a substantial deviation could result in the account getting instantly frozen until a human investigates the matter.
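As a rough illustration of that comparison against historical data, the sketch below flags a transaction whose amount sits more than three standard deviations from the account’s history. Real systems score far richer features (merchant, location, device, timing); the threshold and figures here are purely hypothetical:

```python
# Minimal anomaly check: compare a new transaction amount against the
# account's historical mean, flagging deviations beyond n sigma.
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float, n_sigma: float = 3.0) -> bool:
    mu, sigma = mean(history), stdev(history)
    return abs(amount - mu) > n_sigma * sigma

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]  # typical past purchases
print(is_anomalous(history, 54.0))   # False: within normal range
print(is_anomalous(history, 900.0))  # True: freeze and investigate
```

In a production pipeline, a "True" here would not decide the outcome on its own; it would feed a risk score alongside many other signals before any account is frozen.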

Sometimes, people get automated prompts that ask whether they made certain unusual transactions. When those queries come via text messages or email, people can respond in seconds, unlocking their accounts and the associated cards without ever needing to speak to representatives.

However, when genuinely unauthorized transactions do hit an account, the speed with which predictive analytics products can make it inoperable is a powerful protective measure. Fraudsters operate quickly, so thwarting their attempts as soon as possible becomes essential.

Predictive analytics products also operate when account holders cannot take action themselves. For example, if someone tries to commit fraud while the account user is sleeping, they probably won’t respond to text messages and emails in time. In such cases, the analytics tool can determine that the safest response is to lock the affected party’s account until further investigations can occur.

Payment fraud prevention requires proactiveness

Payment fraud can become a massive issue for everyone affected. Speed is essential when addressing and curbing it. Fortunately, predictive analytics can process data faster than humans, allowing automation to provide additional screening and protective measures that stop fraudsters’ actions and keep everyone safer.

However, any financial institution decision-makers considering using it should choose their solutions carefully and understand both the pros and cons of individual products. Although no commercial option is wholly without downsides, leaders often determine that predictive analytics-based products are essential parts of operating effectively in a world where fraud is increasingly rampant and frequently damaging.

AI Sustainability: How Microsoft, Google Cloud, IBM & Dell are Working on Reducing AI’s Climate Harms

Many companies aim to use AI to measure sustainability-related effects, such as weather and energy use, but fewer talk about mitigating AI’s water- and power-hungry nature in the first place. Running generative AI sustainably could reduce some of the impact of climate change and look good to investors who want to contribute positively to the Earth.

This article will examine the environmental impact of generative AI workloads and processes and how some tech giants are addressing those issues. We spoke to Dell, Google Cloud, IBM and Microsoft.

How much energy does generative AI consume, and what is the possible impact of that usage?

How much energy generative AI consumes depends on factors including physical location, the size of the model, the intensity of the training and more. Excessive energy use can contribute to drought, animal habitat loss and climate change.

A team of researchers from Microsoft, Hugging Face, the Allen Institute for AI and several universities proposed a measurement standard in 2022. Using it, they found that training a small language transformer model on 8 NVIDIA V100 GPUs for 36 hours used 37.3 kWh. The carbon emissions this translates to depend heavily on the region where training is performed, but on average, training the language model emits about as much carbon dioxide as burning one gallon of gasoline. Training just a fraction of a theoretical large model (a 6 billion parameter language model) would emit about as much carbon dioxide as powering a home does for a year.

Another study found AI technology could grow to consume 29.3 terawatt-hours per year — the same amount of electricity used by the entire country of Ireland.

A conversation of about 10 to 50 responses with GPT-3 consumes a half-liter of fresh water, according to Shaolei Ren, an associate professor of electrical and computer engineering at UC Riverside, speaking to Yale Environment 360.

Barron’s reported that SpaceX and Tesla mogul Elon Musk suggested during the Bosch ConnectedWorld conference in February 2024 that generative AI chips could lead to an electricity shortage.

Generative AI’s energy use depends on the data center

The amount of energy consumed or emissions created depends a lot on the location of the data center, the time of year and time of day.

“Training AI models can be energy-intensive, but energy and resource consumption depend on the type of AI workload, what technology is used to run those workloads, age of the data centers and other factors,” said Alyson Freeman, customer innovation lead, sustainability and ESG at Dell.

Nate Suda, senior director analyst at Gartner, pointed out in an email to TechRepublic that it’s important to differentiate between data centers’ energy sources, data centers’ power usage effectiveness and embedded emissions in large language models hardware.

A data center hosting an LLM may be relatively energy efficient compared to an organization that creates an LLM from scratch in its own data center, since hyperscalers have “material investments in low-carbon electricity, and highly efficient data centers,” said Suda.

On the other hand, massive data centers getting increasingly efficient can kick off the Jevons effect, in which decreasing the amount of resources needed for one technology increases demand and therefore resource use overall.

How are tech giants addressing AI sustainability in terms of electricity use?

Many tech giants have sustainability goals, but fewer are specific to generative AI and electricity use. For Microsoft, one goal is to power all data centers and facilities with 100% additional new renewable energy generation. Plus, Microsoft emphasizes power purchase agreements with renewable power projects. In a power purchase agreement, the customer negotiates a preset price for energy over the next five to twenty years, providing a steady revenue stream for the utility and a fixed price for the customer.

“We’re also working on solutions that enable datacenters to provide energy capacity back to the grid to contribute to local energy supply during times of high demand,” said Sean James, director of datacenter research at Microsoft, in an email to TechRepublic.

“Don’t use a sledgehammer to crack open a nut”

IBM is addressing sustainable electricity use around generative AI through “recycling” AI models; this is a technique developed with MIT in which smaller models “grow” instead of a larger model having to be trained from scratch.

“There are definitely ways for organizations to reap the benefits of AI while minimizing energy use,” said Christina Shim, global head of IBM sustainability software, in an email to TechRepublic. “Model choice is hugely important. Using foundation models vs. training new models from scratch helps ‘amortize’ that energy-intensive training across a long lifetime of use. Using a small model trained on the right data is more energy efficient and can achieve the same results or better. Don’t use a sledgehammer to crack open a nut.”

Ways to reduce energy use of generative AI in data centers

One way to reduce the energy use of generative AI is to make sure the data centers running it consume less. This may involve novel heating and cooling methods, among other approaches, including:

  • Renewable energy, such as electricity from sustainable sources like wind, solar or geothermal.
  • Switching from diesel backup generators to battery-powered generators.
  • Efficient heating, cooling and software architecture to minimize data centers’ emissions or electricity use. Efficient cooling techniques include water cooling, adiabatic (air pressure) systems or novel refrigerants.
  • Commitments to net zero carbon emissions or carbon neutrality, which sometimes include carbon offsets.

Benjamin Lee, professor of electrical and systems engineering and computer and information science at the University of Pennsylvania, pointed out to TechRepublic in an email interview that running AI workloads in a data center creates greenhouse gas emissions in two ways.

  • Embodied carbon costs, or emissions associated with the manufacturing and fabricating of AI chips, are relatively small in data centers, Lee said.
  • Operational carbon costs, or the emissions from supplying the chips with electricity while running processes, are larger and increasing.

Energy efficiency or sustainability?

“Energy efficiency does not necessarily lead to sustainability,” Lee said. “The industry is rapidly building datacenter capacity and deploying AI chips. Those chips, no matter how efficient, will increase AI’s electricity usage and carbon footprint.”

Neither sustainability efforts like energy offsets nor renewable energy installations are likely to grow fast enough to keep up with datacenter capacity, Lee found.

“If you think about running a highly efficient form of accelerated compute with our own in-house GPUs, we leverage liquid cooling for those GPUs that allows them to run faster, but also in a much more energy efficient and as a result a more cost effective way,” said Mark Lohmeyer, vice president and general manager of compute and AI/ML Infrastructure at Google Cloud, in an interview with TechRepublic at NVIDIA GTC in March.

Google Cloud approaches power sustainability from the angle of using software to manage up-time.

“What you don’t want to have is a bunch of GPUs or any type of compute deployed using power but not actively producing, you know, the outcomes that we’re looking for,” he said. “And so driving high levels of utilization of the infrastructure is also key to sustainability and energy efficiency.”

Lee agreed with this strategy: “Because Google runs so much computation on its chips, the average embodied carbon cost per AI task is small,” he told TechRepublic in an email.

Right-sizing AI workloads

Freeman noted Dell sees the importance of right-sizing AI workloads as well, plus using energy-efficient infrastructure in data centers.

“With the rapidly increasing popularity of AI and its reliance on higher processing speeds, more pressure will be put on the energy load required to run data centers,” Freeman wrote to TechRepublic. “Poor utilization of IT assets is the single biggest cause of energy waste in the data center, and with energy costs typically accounting for 40-60% of a data center’s operating costs, reducing total power consumption will likely be something at the top of customers’ minds.”

She encouraged organizations to use energy-efficient hardware configurations, optimized thermals and cooling, green energy sources and responsible retirement of old or obsolete systems.

When planning around energy use, Shim said IBM considers how long data has to travel, space utilization, energy-efficient IT and datacenter infrastructure, and open source sustainability innovations.

How are tech giants addressing AI sustainability in terms of water use?

Water use has been a concern for large corporations for decades. This concern isn’t specific to generative AI, since the problems overall — habitat loss, water loss and increased global warming — are the same no matter what a data center is being used for. However, generative AI could accelerate those threats.

The need for more efficient water use intersects with increased generative AI use in data center operations and cooling. Microsoft doesn’t separate out generative AI processes in its environmental reports, but the company does show that its total water consumption jumped from 4,196,461 cubic meters in 2020 to 6,399,415 cubic meters in 2022.
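That jump is easier to grasp as a relative change; a one-line calculation from the reported figures:

```python
# Percent change in Microsoft's reported total water consumption,
# computed directly from the figures above (cubic meters).
water_2020 = 4_196_461
water_2022 = 6_399_415

pct_increase = (water_2022 - water_2020) / water_2020 * 100
print(f"{pct_increase:.1f}%")  # roughly a 52.5% increase over two years
```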

“Water use is something that we have to be mindful of for all computing, not just AI,” said Shim. “Like with energy use, there are ways businesses can be more efficient. For example, a data center could have a blue roof that collects and stores rainwater. It could recirculate and reuse water. It could use more efficient cooling systems.”

Shim said IBM is working on water sustainability through some upcoming projects. Ongoing modernization of the venerable IBM research data center in Hursley, England will include an underground reservoir to help with cooling and may go off-grid for some periods of time.

Microsoft has contracted water replenishment projects: recycling water, using reclaimed water and investing in technologies such as air-to-water generation and adiabatic cooling.

“We take a holistic approach to water reduction across our business, from design to efficiency, looking for immediate opportunities through operational usage and, in the longer term, through design innovation to reduce, recycle and repurpose water,” said James.

Microsoft addresses water use in five ways, James said:

  • Reducing water use intensity.
  • Replenishing more water than the organization consumes.
  • Increasing access to water and sanitation services for people across the globe.
  • Driving innovation to scale water solutions.
  • Advocating for effective water policy.

Organizations can recycle water used in data centers, or invest in clean water initiatives elsewhere, such as Google’s Bay View office’s effort to preserve wetlands.

How do tech giants disclose their environmental impact?

Organizations interested in large tech companies’ environmental impact can find many sustainability reports publicly:

  • Apple 2023 environmental report
  • Amazon 2022 sustainability report
  • Dell 2023 ESG report
  • Google 2023 sustainability report
  • IBM 2024 impact report
  • Microsoft 2022 sustainability report
  • NVIDIA 2023 corporate responsibility report
  • Salesforce 2023 stakeholder impact report

Some AI-specific callouts in these reports are:

  • IBM used AI to capture and analyze IBM’s energy data, creating a more thorough picture of energy consumption
  • NVIDIA focuses on the social impact of AI instead of the environmental impact in their report, committing to “models that comply with privacy laws, provide transparency about the model’s design and limitations, perform safely and as intended, and with unwanted bias reduced to the extent possible.”

Potential gaps in environmental impact reports

Many large organizations include carbon offsets as part of their efforts to reach carbon neutrality. Carbon offsets can be controversial. Some people argue that claiming credits for preventing environmental damage elsewhere in the world results in inaccuracies and does little to preserve local natural places or places already in harm’s way.

Tech giants are aware of the potential impacts of resource shortages, but may also fall into the trap of “greenwashing,” or focusing on positive efforts while obscuring larger negative impacts. Greenwashing can happen accidentally if companies do not have sufficient data on their current environmental impact compared to their climate targets.

When to not use generative AI

Deciding not to use generative AI would technically reduce energy consumption by your organization, just as declining to open a new facility might, but doing so isn’t always practical in the business world.

“It is vital for organizations to measure, track, understand and reduce the carbon emissions they generate,” said Suda. “For most organizations making significant investments in genAI, this ‘carbon accounting’ is too large for one person and a spreadsheet. They need a team and technology investments, both in carbon accounting software, and in the data infrastructure to ensure that an organization’s carbon data is maximally used for proactive decision making.”

Apple, NVIDIA and OpenAI declined to comment for this article.

Implementing AI in K-12 education

Roundtable Discussion with Rebecca Bultsma and Ahmad Jawad


In the latest episode of the AI Think Tank Podcast, we ventured into the rapidly evolving intersection of artificial intelligence and K-12 education. I was fortunate to host a discussion that not only explored the transformative potentials of AI in educational settings but also tackled the complexities and ethical concerns that come with such technological integration. Joining me were friends Rebecca Bultsma and Ahmad Jawad, two notable experts who brought a wealth of knowledge and insight to our conversation.

Rebecca Bultsma, joining us from Alberta, Canada, is an international AI trainer, presenter, and enthusiast who has dedicated her career to enhancing AI literacy. Her work primarily focuses on educating professionals across North America about AI, its applications, and the careful considerations necessary when implementing it in educational environments. Over the years, Rebecca has become a vocal advocate for the ethical use of AI, particularly in education, contributing to various panels and committees to promote responsible practices.

Alongside Rebecca was Ahmad Jawad, CEO of Doceo AI and the ed-tech company Intellimedia. Also based in Alberta, Ahmad has led his company in developing educational solutions that have been widely adopted by numerous school districts. His efforts have established him as a strategic thinker in the realm of K-12 education, where he continues to explore how AI can enhance educational outcomes while ensuring the technology is used responsibly and effectively.

Our discussion opened with a reflection on the potential of AI to revolutionize the educational landscape. I shared my excitement about AI’s capacity to personalize learning experiences and streamline administrative operations, making education more efficient and accessible. However, we quickly acknowledged the challenges that accompany these advancements, particularly issues surrounding data privacy, the digital divide, and the possibility of biases embedded within AI algorithms.

Rebecca emphasized the importance of foundational AI literacy for educators and administrators. She passionately spoke about her experiences training educational leaders, highlighting the frequent questions about starting points for AI integration and the development of comprehensive policies that align with ethical standards. Her stories from various training sessions illustrated the widespread curiosity and concern about how best to implement AI in schools.

Ahmad provided insights into the practical applications of AI within educational institutions. He discussed how his company’s solutions have enabled personalized learning paths and real-time analytics, helping educators better understand and support their students’ learning journeys. However, Ahmad was quick to point out that the successful implementation of AI requires more than just technological adoption; it demands a thorough understanding of educational goals and ethical considerations to ensure that AI tools are used to their fullest potential without compromising student welfare.

As we went deeper into the ethical implications of AI in education, both Rebecca and Ahmad underscored the need for robust frameworks to manage AI usage in schools. We discussed the importance of transparency in how student data is handled and the steps necessary to prevent AI from exacerbating existing educational inequalities. The conversation often circled back to the idea that while AI can offer significant benefits, it must be deployed thoughtfully and with a steadfast commitment to equity and fairness.

In our conclusion, we considered the future of AI in K-12 education with a mix of optimism and caution. It was clear from our discussion that AI holds tremendous promise for transforming education by making learning more engaging and tailored to individual needs. Yet, both Rebecca and Ahmad reminded us that this technological journey must be navigated carefully, with ongoing attention to the ethical dimensions of AI use.

Subscribe to the AI Think Tank Podcast on YouTube.

ByteDance Uses GPT-4V to Create a Multimodal LLM, Groma, for Enhanced Image Region Understanding

Researchers from ByteDance and the University of Hong Kong recently developed Groma, a multimodal Large Language Model (MLLM) that excels in region-level image tasks by utilising a localised visual tokenisation approach and leveraging GPT-4V.

Groma excels not only at comprehensive image understanding but is also adept at region-level tasks such as region captioning and visual grounding. Instead of depending on LLMs or external modules for localisation, Groma leverages the spatial understanding capability of its visual tokeniser. This ‘perceive-then-understand’ design also resembles the human visual process.

In this localised visual tokenisation mechanism, an image is segmented into regions of interest, which are then converted into region tokens. Groma encodes the image into both global image tokens and local region tokens. By integrating region tokens into user instructions and model responses, Groma understands user-specified region inputs and grounds its textual output to images.
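To make that flow concrete, here is a minimal, purely illustrative sketch of the perceive-then-understand pipeline. The function names, token strings, and the hard-coded region proposer are hypothetical placeholders, not Groma’s actual implementation:

```python
# Illustrative sketch only: the region proposer, token strings, and
# function names below are invented, not ByteDance's actual code.

def propose_regions(image):
    """Stand-in for the visual tokeniser's region-proposal step.
    Returns bounding boxes (x1, y1, x2, y2) for regions of interest."""
    return [(0, 0, 32, 32), (32, 32, 64, 64)]  # two fake regions

def encode_image(image):
    """Encode the image into global tokens plus one token per region."""
    global_tokens = ["<img>"]  # whole-image representation
    region_tokens = [f"<region_{i}>" for i, _ in enumerate(propose_regions(image))]
    return global_tokens, region_tokens

def build_prompt(user_text, image):
    """Interleave region tokens with the user's text so the language
    model can refer to, and ground its answer in, specific regions."""
    global_tokens, region_tokens = encode_image(image)
    return global_tokens + region_tokens + user_text.split()

print(build_prompt("Describe the highlighted region.", image=None))
# ['<img>', '<region_0>', '<region_1>', 'Describe', 'the', 'highlighted', 'region.']
```

The point of the sketch is the ordering: localisation happens during tokenisation, before the language model runs, rather than being bolted on afterwards.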

Furthermore, to improve Groma’s ability to engage in visually grounded conversations, the team curated a dataset of 30k visually grounded conversations for instruction finetuning. This marks the first grounded chat dataset constructed with both visual and textual prompts, leveraging the powerful GPT-4V for data generation.

In contrast to other MLLMs that depend on the language model or an external module for localisation, Groma consistently shows superior performance on standard referring and grounding benchmarks. This highlights the benefits of embedding localisation into image tokenisation.

The post ByteDance Uses GPT-4V to Create a Multimodal LLM, Groma, for Enhanced Image Region Understanding appeared first on Analytics India Magazine.

Japan is the Next Big Hub for Indian Tech Talent

Suddenly, Japan seems to have become the sweet spot for tech investments. OpenAI recently opened its new office in the country and released a custom GPT-4 model for Japanese. Microsoft announced it would invest $2.9 billion over the next two years to increase its hyperscale cloud computing and AI infrastructure in Japan.

A few days ago, Google invested $1 billion to boost digital connectivity between the US and Japan by constructing two new subsea cables, Proa and Taihei. These cables will also link to the Pacific Islands, enhancing reliability and reducing latency.

Oracle, too, has announced an investment of $8 billion over the next 10 years to meet the demand for cloud computing and AI infrastructure in Japan. Earlier this year, Amazon Web Services (AWS) announced its plans to invest $15.24 billion in Japan by 2027 to expand the cloud computing infrastructure that serves as a backbone for AI services.

Last year, NVIDIA CEO Jensen Huang said his company would do its best to supply its GPUs to Japan amid extremely high market demand.

And while the tech giants make a beeline for Japan, there emerges a clear winner – Indian tech professionals. Given the demand, Japan is now actively hiring Indian professionals, particularly in the technology and engineering sectors. This trend is driven by the country’s ageing population and labour shortages across industries.

Can India Fulfill Japan’s Needs?

“I’m glad to see more AI in Japan,” said Llion Jones, the co-founder of Tokyo-based Sakana AI and one of the authors of Google’s 2017 research paper ‘Attention Is All You Need’.

Further, he told AIM that they’ve interviewed a couple of Indian folks. “If you go to sakana.ai/careers, we received only about 33% of applicants from here [Japan], and the rest are from overseas. About 13% of them are from India,” he added.

According to a recent report, Japan aims to expand direct investment and attract skilled tech workers from Southeast Asia and India. Moreover, the country recently announced its plans to introduce new visas to make it easier for Indian and Southeast Asian tech workers to move to Japan.

Japanese Ambassador to India, Hiroshi F Suzuki, extended a warm invitation to Indian students and young people, encouraging them to consider visiting and studying in Japan.

In a conversation with Hindi-speaking YouTuber Mayo San, Ambassador Suzuki said that it is very easy to get a student visa to Japan. He said all you have to do is present a student ID card to get a visa. “I’m encouraging young Indians to go to Japan to get skill training and job opportunities,” he said.

According to the Japanese Ministry of Justice’s 2023 statistics, 46,262 Indian nationals live in Japan, mainly in Tokyo. They primarily work in the information technology (IT) and creative sectors.

Working in Japan — a Humbling Experience

AIM spoke with a few Indian employees working in Japan, who say that their experience has been positive.

“By 2050, about 60% of the Japanese population will be above 50. If this happens, who’s going to generate revenue for them? Moreover, one AI can do the job of two or more people. The country is trying to address these problems,” shared an Indian employee who currently works at Rakuten.

Further, the employee said that learning basic Japanese is recommended for Indian employees. At Rakuten, however, communication happens in English, which is the company’s official language.

The Rakuten employee also said that Japanese people are very humble and helpful. If they don’t understand English, they make the effort to translate the documents into Japanese.

Further, the employee added that if you are fluent in N3-level Japanese, you can become a manager in Japan as well, though he noted that most Indians in Japan work in the tech sector.

Interestingly, 10,000 of Rakuten’s 50,000 employees are Indian, of whom 2,000 work in Japan.

Rakuten recently partnered with OpenAI to create solutions to address the unique needs and challenges of telecommunications operators when planning, building and managing mobile networks.

OpenAI-backed Speak is also quite popular in Japan.

Another employee, who works for an IT firm, said that in Japan there is no hire-and-fire culture, and the Japanese Ministry looks after workers. “Every day that you work in an organisation, your timesheet is sent to the Japanese Ministry at the end of the day,” he said.

The Land of Rising AI Indeed

There is no stopping Japan. The country is focusing on sovereign AI. Recently, Fujitsu Limited and Oracle partnered to offer sovereign cloud and AI capabilities, meeting the digital sovereignty needs of Japanese businesses and the public sector.

Moreover, unlike the EU and US, copyright laws in Japan regarding generative AI are quite lenient, which leads to Japan being called a ‘machine learning paradise’.

According to reports from a committee meeting, Japan’s minister of education, culture, sports, science, and technology, Keiko Nagaoka, indicated that AI companies in Japan can use “whatever they want” for AI training without restrictions based on profit motives, the nature of the activity, or the source of the content, including materials from illegal sites.

With such laws, Japan is definitely an attractive place for AI startups to train their models. Vinod Khosla said, “Japan and India are set to be the next AI hotspots”.

Mega Networks to Establish AI Server Factory in Maharashtra

Mega Networks, a leading computer hardware company, has announced plans to set up a factory in Maharashtra by the third quarter of the fiscal year 2024-2025 to manufacture AI servers locally, according to CEO Amrish Pipada.

The company intends to invest Rs 350-400 crore in FY25, with initial funding from internal accruals and debt of Rs 100-120 crore. The remaining funds will be raised through long-term debt and private equity.

Mega Networks is one of two Indian companies selected for the production-linked incentive (PLI) scheme 2.0 for IT hardware. This scheme provides a 5% incentive on net incremental sales of goods manufactured in India.
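As a rough illustration of how that incentive works, the payout is simply 5% of net incremental sales over a base year. The sales figures below are invented for illustration, not Mega Networks’ actual numbers:

```python
# Hypothetical worked example of the PLI 2.0 incentive described above:
# 5% of net incremental sales of goods manufactured in India.
# All figures are made up for illustration.

PLI_RATE = 0.05

def pli_incentive(current_sales_cr, base_year_sales_cr):
    """Incentive (in Rs crore) on net incremental sales over the base year."""
    incremental = max(current_sales_cr - base_year_sales_cr, 0)
    return PLI_RATE * incremental

# e.g. sales rising from Rs 300 crore to Rs 500 crore earns 5% of the
# Rs 200 crore increment:
print(pli_incentive(500, 300))  # 10.0
```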

Pipada stated that the company has identified potential orders of close to Rs 1,000 crore for the AI servers, with confirmed order bookings of Rs 150-200 crore currently.

The servers, launched in February, are powered by Intel’s Habana Gaudi 1 and 2 series accelerators and 4th and 5th generation Xeon Scalable processors, and can be used for generative AI and high-performance computing, as well as in data centres.

Mega Networks plans to increase its employee count from 110 to around 400 over the coming year to support growth.

The company recorded revenue of Rs 300 crore in FY24, growing 30% year-on-year, and expects AI server manufacturing to significantly boost revenue, with a target of Rs 1,000 crore in two to three years.

The growing AI ecosystem in India presents a significant opportunity for server manufacturers. However, supply chain constraints and the need for government support to develop the domestic ecosystem of parts and design capabilities remain challenges.

AI Likely to Outsmart Us Before We Decode the Brain, says Geoffrey Hinton 

Godfather of AI Geoffrey Hinton recently said that we still won’t understand brains at the point where AI is smarter than humans.

Earlier this month, Hinton was presented with the UCD Ulysses Medal for his contributions to the field of deep learning. During his lecture at the college, Hinton spoke about how it wasn’t necessary to emulate the human brain with artificial intelligence. In fact, he said he had stopped believing that neural networks on digital computers operate the way the human brain does.

“I stopped believing that if you make them more like brains, they’ll get better. It’s quite possible that we still won’t understand the brain at the point when these things are smarter than us,” he said.

Further, speaking on the differences between AI and the human brain, he said, “I started believing neural nets using backpropagation on digital computers are already somewhat different from us.”

He said that while AI can hold knowledge much as a brain does, the ability to copy models and share what they learn makes them far more efficient.

Hinton has been vocal about the divergence of AI from human brains, stating that AI is much better than the human brain at learning. However, doing so requires significantly more power than a brain does.

“Biological computation is great for evolving because it requires very little energy, but my conclusion is that digital computation is just better,” he said during the Romanes Lecture he delivered in February. He also mentioned that this was likely as much as 20 years away.

Following Hinton’s statement, many seem to think that AI, or even ASI, could help explain our brains to us, including Perplexity’s Aravind Srinivas, who said:

AI might be the only way humans can understand their own brains better with.

— Aravind Srinivas (@AravSrinivas) April 21, 2024

Some also believed this could provide breakthroughs in identifying and diagnosing mental illnesses, as well as in finding cures for them.

However, as Hinton had stated, AI might become smarter than humans before that happened.

Further, Hinton joked that he had spent his entire life trying to understand how brains work using artificial neural networks, a “failure” that nonetheless led to his current contributions to the field.
