When Google Thinks It Owns the Internet 

Google has introduced a new proposal, the “Web Environment Integrity Explainer,” drafted by four of its engineers. It revolves around the idea of enhancing “trust and security” in the client environment, and introduces a new API that lets websites request a token attesting to the environment the client code is running in.

In short, Google is killing ad blockers.

However positive the move may seem at first glance, it has sparked controversy in the tech community as a huge red flag for privacy rights.

Digging Deeper

According to Google’s engineers, websites will be free to trust or distrust the information provided by the token. Usually, the information will come from the operating system, but that is not mandatory, and different operating systems may use the same information source.

The goal of this tool is to help websites “detect fraud and ensure the authenticity of devices” and software. They want to create a strong and long-lasting solution to prevent abuse.

The proposed API, despite being presented as a tool to foster trust, could potentially be exploited to control user behavior on the web. It might serve as a concealed introduction of Digital Rights Management (DRM) into web pages, rendering ad blocking nearly impossible, and could force users onto fully locked-down devices or make them prove their authenticity to access online content.

This also raises concerns about monopolistic control, where tech giants like Google could manipulate trust scores by controlling the “attesters” responsible for verifying client environments: the parties who decide whether your device and browser can be trusted to access certain websites. Google could easily exploit this by favoring Chrome as the attester, subtly cementing Chrome’s dominance over browsers like Firefox. Such a setup could restrict your freedom to choose any browser on any operating system, harming the open web.

Google is Now an Ad-Blocking Company

The rise of ad-based business models has been a driving force behind the development of the web, enabling the growth of numerous online platforms. Unfortunately, it has also led to the overwhelming influence of a single web browser controlled by Google, creating a situation where the majority of the web’s development and specifications are subject to decisions made by big tech.

As per reports, Google’s Chrome dominates the browser market with a 63% share, while Apple’s Safari ranks second at 20.72%. Following behind are Microsoft Edge and Opera, with 5.31% and 2.82% respectively, and Mozilla’s Firefox with 2.77%.

Google’s main source of revenue in 2022 was search ads. Out of the impressive total of $279.81 billion earned by the company, an astonishing $162.45 billion came from search ads alone. In addition to this, Google and YouTube introduced two AI-powered ad solutions called Demand Gen and Video View.

Google’s dominant position in the web browser market, with its massive user base, makes it seem invincible to many. The proposal would affect not just Chrome users but also alternative browsers like Brave and Edge, which are built on the same open-source project, Chromium. As a result, traditional methods of advocacy and of promoting alternatives such as Firefox are rendered largely ineffective.

User Identity at Stake

Everyone has a human right to privacy online, encompassing freedom from surveillance, the right to use encryption, and the ability to remain anonymous. Each person should also have control over their personal data, including how it is collected, stored, used, deleted, and shared.

The API defies all of these in one go, which makes it a huge breach of privacy.

The API allows websites to request a token that provides information about the user’s device and software stack, an indication that Google could harvest more data about users than is needed for normal website functionality.
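Conceptually, a website’s server would only serve content after validating such a token against the attester. The following is a purely illustrative sketch of that server-side check; all names are made up, and HMAC with a shared secret stands in for the attester-signed token format, which the proposal does not pin down:

```python
import hashlib
import hmac
import json

# Stand-in for the attester's signing key (the real proposal uses
# attester-signed tokens, not a shared secret).
ATTESTER_KEY = b"shared-secret-for-demo-only"

def make_token(payload: dict) -> bytes:
    """What an attester might emit: a payload plus a signature over it."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(ATTESTER_KEY, body, hashlib.sha256).hexdigest().encode()
    return body + b"." + sig

def verify_token(token: bytes):
    """What a website's server might do: reject any tampered token."""
    body, _, sig = token.rpartition(b".")
    good = hmac.new(ATTESTER_KEY, body, hashlib.sha256).hexdigest().encode()
    return json.loads(body) if hmac.compare_digest(sig, good) else None

token = make_token({"browser": "trusted", "os_integrity": True})
print(verify_token(token))  # the site serves content only if this succeeds
```

The privacy worry follows directly from this shape: whatever fields the attester puts in the payload, the website receives and can log.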

While the proposal states that the tokens will not include unique identifiers, there is still a risk of potential misuse of this system. If websites start associating specific devices with user behavior, it could lead to profiling. This profiling might be used for targeted advertising or even discriminatory practices, which raises ethical and privacy concerns.

The proposal also does not mention explicit user consent for the collection and use of this information. The API could also potentially undermine users’ ability to browse the web anonymously, especially if websites can link specific device and software information with individual users over time.

Another pressing concern is how to prevent attesters from using the system to exclude certain vendors. The proposal vaguely mentions requiring attesters to offer their services under the same conditions to all browsers meeting certain baseline requirements, but specifics on setting these requirements and enforcing them remain unclear.

Just a couple of weeks back, Google quietly made changes to its privacy rules and revealed that it collects information from public sources on the internet to improve its AI services like Bard and Cloud AI.

So it is not a surprise that your data is the reason why LLMs like PaLM-2 will just get bigger and better over time.

So, proceed with caution!

Read more: Why Google Is Killing Itself

The post When Google Thinks It Owns the Internet appeared first on Analytics India Magazine.

Official ChatGPT app for Android to roll out this week, and you can pre-register now

The official ChatGPT app in Google Play

Android users jealous because OpenAI's ChatGPT app so far works only on the iPhone need be jealous no longer. In a tweet posted Friday, OpenAI announced the arrival of a ChatGPT app for Android slated to roll out this week. Eager users can pre-register for the app at Google Play to make sure they're notified as soon as it's released.

OpenAI didn't provide many details about the app at its Google Play page other than to say that it will be free, will sync your chat history across devices, and will bring the newest model improvements.

Also: How to use ChatGPT: Everything you need to know

But if the Android flavor works like its iOS counterpart rolled out this past May, the app should let you speak your queries in addition to typing them. Plus, it will be aimed at both free ChatGPT users and ChatGPT Plus subscribers who gain access to early features and the enhanced capabilities of GPT-4.

With the popularity of ChatGPT, a host of third-party developers have launched their own AI chatbot apps for Android and iOS, virtually all powered by OpenAI's ChatGPT model. But many of the apps are freemium products. That means you receive a limited number of daily chats or requests for free. Want more, and you'll have to pony up money for a monthly or yearly subscription. In contrast, the official ChatGPT is truly free, giving you more than enough chats and AI assistance to fill your day.

Also: How does ChatGPT actually work?

The iOS version of the app offers the same capabilities as the website, letting you submit requests to ask questions, find information, and create content. But there are a couple of bonus features, such as voice dictation and chat history syncing across the app and the website. You're also able to export the text of your chats and view them in an HTML file. And paid ChatGPT Plus subscribers can easily switch between GPT-3.5 and GPT-4.

You can now chat with a famous AI character on Viber. Here’s how

Kuki AI on Viber

Typically, you use a messaging app like Viber to communicate with family, friends, and loved ones. Now, you will be able to contact an extra individual, and it happens to be a robot.

On Monday, Rakuten Viber announced a new partnership with ICONIQ to bring you a robot friend you can chat with and talk to, directly in the Viber app.

Also: The best AI chatbots

ICONIQ created Kuki, an AI character whose sole purpose is to entertain humans; Kuki has even served as a brand ambassador for H&M, modeled for Vogue, and starred in its own Roblox game.

Now, Kuki AI will be integrated into Viber and participate in open-ended conversations that provide both companionship and entertainment "similar to a friend," according to the release.

"AI is becoming more prevalent within the messaging app industry and we are excited to continue to support the development of this emerging technology," says Ofir Eyal, CEO of Rakuten Viber. "AI companions, like Kuki, will provide our users with an entirely unique entertainment experience."

Also: Do you use Snapchat's AI chatbot? Here's the data it's pulling from you

As opposed to most chatbots we typically use, such as ChatGPT, Kuki's purpose seems to be to entertain rather than act as an assistant.

Kuki is already available to Viber users and can be used 24/7 for a range of different activities including getting a tarot card reading, playing a game, getting a horoscope, or just hanging out.

To find Kuki, all you have to do is search in the chat function of the app or go to its explore page. There, you can subscribe to the chatbot and start chatting.

Sam Altman’s Worldcoin Is Live Now

Founded in 2019 by OpenAI chief executive Sam Altman, Max Novendstern, and Alex Blania, Worldcoin has finally started rolling out its services today.

“If successful, we believe Worldcoin could drastically increase economic opportunity, scale a reliable solution for distinguishing humans from AI online while preserving privacy, enable global democratic processes, and eventually show a potential path to AI-funded Universal Basic Income (UBI),” read the official blog post.

San Francisco- and Berlin-based startup Tools for Humanity is behind the project.

Worldcoin uses an eye-scanning device called the ‘Orb’ to tell humans apart from bots and reward them with tokens. However, there’s a problem: US regulators are concerned about digital assets like cryptocurrencies being misused for speculation and fraud, so Worldcoin tokens won’t be available in the US at first.

Under the Hood

Worldcoin is composed of two main components: World ID, a privacy-focused digital identity, and where allowed by the law, a digital currency called WLD, which individuals receive simply for being human. The team hopes that, in places with unclear regulations, like the U.S., efforts will be made to extend these benefits to more people.

The price of WLD rose when trading began on Monday, peaking at $5.29 on Binance, the world’s largest exchange. At 1000 GMT it was trading at $2.49, up from an opening price of $0.15. Total trading volume for WLD was $25.1 million, as reported on Binance’s website.

The startup has raised around $250 million in total, with backing from investors like Andreessen Horowitz, Khosla Ventures, and Reid Hoffman. In 2021, the company introduced its token (WLD), a Layer 2 Ethereum-based cryptocurrency with its own economy, to build a more equitable global internet-driven economy. Worldcoin aims to verify human identities online using World ID, combating bots and fake identities with iris scanning.

The company raised $25 million in October 2021 and another $100 million within six months, pushing the project’s valuation to $3 billion. In May 2023, it announced an additional $115 million in funding to invest in bot detection, research, and expanding the project, which had already onboarded approximately two million users during its beta phase.

High Hopes on Crypto

Despite concerns about privacy, Altman, the man behind ChatGPT, believes in the potential of crypto. Cryptocurrencies have a history of instability, and recent events like the collapse of FTX hit the crypto markets hard. However, the market is recovering.

The future growth of cryptocurrencies is uncertain due to external factors and potential regulations, but overall the industry is expected to expand. Many leaders are investing in crypto products, which signals their confidence in the future success of digital currencies. This positive anticipation recalls a reversal of Gresham’s law: instead of bad money driving out good, the hope is that a stronger currency pushes weaker ones out of circulation.

Free Generative AI Courses by Google

Before we get into the free courses, let me quickly provide a simple definition of Generative AI. Generative AI can generate text, images, or other forms of media based on user prompts. It can produce new content, replace repetitive tasks, work on customized data, and more. For example, PandasAI was released not long ago — a generative AI Python library that integrates generative AI capabilities into pandas for simpler data analysis.

Just like PandasAI, we expect to see more Generative AI tools and software be released and integrated into our everyday lives to make processes simpler and smoother.

Let’s now talk about the FREE courses about Generative AI that are being provided by Google.

Google’s Generative AI Learning Path

Google created the Generative AI learning path, a collection of courses covering Generative AI products and technologies. You will learn the fundamentals of Large Language Models (LLMs) as well as how to create and deploy Generative AI solutions on Google Cloud.

The learning path includes the following 10 courses:

1. Introduction to Generative AI

Link: Introduction to Generative AI

This course will provide you with an overview of the fundamentals of Generative AI. If you’re completely new to Generative AI, this is the best place to start. You will also learn how Generative AI differs from other machine learning methods.

2. Introduction to Large Language Models

Link: Introduction to Large Language Models

With the rise of chatbots such as ChatGPT and Bard, learning what large language models (LLMs) are, how they are built, their uses, and prompt tuning is vital.

3. Introduction to Responsible AI

Link: Introduction to Responsible AI

There has been recent public outcry over how responsibly AI is being developed. This course goes through how responsible AI is implemented in Google’s products. You will learn about Google’s 7 AI Principles, along with social responsibility, accountability, and privacy design principles.

4. Generative AI Fundamentals

Link: Generative AI Fundamentals

Once you have completed the first 3 courses, the 4th course quizzes you on all three. Some of you may already have the background knowledge and get through this in no time; it is nonetheless useful for beginners and for filling in the blanks.

5. Introduction to Image Generation

Link: Introduction to Image Generation

A big part of Generative AI is generating images using diffusion models. In this course, you will learn more about diffusion models, as well as dive into machine learning, deep learning, and convolutional neural nets.

6. Encoder-Decoder Architecture

Link: Encoder-Decoder Architecture

Learn more about the powerful machine learning architecture for sequence-to-sequence tasks — the encoder-decoder architecture. With this, you will be able to understand more about machine translation, text summarization, and question-answering.

This course also includes a lab walkthrough in which you will code a simple implementation of the encoder-decoder architecture for a specific task.

7. Attention Mechanism

Link: Attention Mechanism

I have heard a lot of people talking about wanting to learn more about this topic. The attention mechanism is a technique that allows neural networks to focus on specific parts of an input sequence. To be successful in the course, you will need a good understanding of machine learning, deep learning, natural language processing, and/or Python programming.

8. Transformer Models and BERT Model

Link: Transformer Models and BERT Model

As the terminology gets more advanced, this is the point where a bit more experience helps. In this course you will learn about the main components of transformer models and Bidirectional Encoder Representations from Transformers (BERT).

For example, you will be able to learn more about the self-attention mechanism, and how it is used to build the BERT model, as well as learn about other tasks such as text classification.

9. Create Image Captioning Models

Link: Create Image Captioning Models

It says it in the name: learn how to create an image captioning model using deep learning, breaking down its components, such as the encoder and decoder. You will then move on to training and evaluating the model, ending up with your own image captioning model that can generate captions for images.

10. Introduction to Generative AI Studio

Link: Introduction to Generative AI Studio

Last but not least is the Generative AI Studio. This course introduces you to walk-through demos of the Generative AI Studio, which helps you prototype and customize generative AI models so you can use their capabilities in your applications. There is also a hands-on lab at the end and a quiz to test your knowledge.

Wrapping it up

This 10-course learning path provided by Google caters not only to beginners but also to machine learning engineers and data scientists looking to shift careers or learn something new. It’s better to stay up to date than fall behind, and Google offers great resources to help students, employees, and newcomers get there.
Nisha Arya is a Data Scientist, Freelance Technical Writer and Community Manager at KDnuggets. She is particularly interested in providing Data Science career advice or tutorials and theory based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence is/can benefit the longevity of human life. A keen learner, seeking to broaden her tech knowledge and writing skills, whilst helping guide others.

PM Modi To Inaugurate ‘Semicon India 2023’, Amid Foxconn-Vedanta Fall Through

Indian Prime Minister Narendra Modi will inaugurate ‘Semicon India 2023’, showcasing India’s semiconductor capabilities and chip-design innovation, on 28 July at Gandhinagar, Gujarat. Companies including Foxconn, Micron, AMD, and Vedanta will offer deep insights into chip-making technologies and innovations.

Various dignitaries, including Minister of State for Electronics and IT, Skill Development, and Entrepreneurship Rajeev Chandrasekhar, will be present at the event. Chandrasekhar has been an active voice for the Indian semiconductor industry. At the recent second Semicon India Future Design roadshow, held at the IISc campus in Bengaluru, he announced that India plans to produce at least 85,000 semiconductor professionals in the next two years.

While the Indian government is eyeing the evolving technology, things have not worked out in the country’s favour in the recent past. Earlier this month, Reuters reported that Taiwanese conglomerate Foxconn will pull out of a joint venture with Indian giant Vedanta Ltd that was set up to produce semiconductors in Gujarat.

As per the partnership agreement, Vedanta would have held 60% equity and Foxconn 40%. Even though India has partnered with IMEC for technological support, the technology will still require several years of development. The situation presented an uncertain future for the Vedanta-Foxconn venture, and the project has now fallen through completely.

While there have been downturns, the Indian government remains optimistic that investments will rise and is already leveraging the talent available. India is a global leader in engineering design and R&D, with a favourable demographic advantage: a young workforce and a significant number of technologically-aligned individuals. Expanding that potential into manufacturing, however, requires more. Unlike in the IT sector, a lack of awareness and of available job options have been major hurdles for the industry.

OpenAI Takes on Google Bard with ChatGPT App for Android Users

Back in May, OpenAI launched the ChatGPT app for iOS users; two months later, the Android version is on its way. According to the official announcement on Twitter, the Android release of ChatGPT is scheduled for next week, and eager users can pre-register for the app starting today through the Google Play Store.

This launch comes amid countless complaints about the quality of ChatGPT’s responses and a decline in users on the platform. Addressing these concerns, OpenAI assures users that it is actively working on updates to enhance the app’s performance and improve the user experience.

Microsoft’s Bing app has been another chatbot option available on both Android and iOS since February. It uses Prometheus, an AI model that combines the comprehensive Bing index, ranking, and answer results with the creative reasoning capabilities of OpenAI’s most advanced GPT models.

Will Google Bard Follow?

As the competition in the chatbot realm intensifies, Google’s Bard chatbot, which relies only on a web-based interface, may find itself facing new challenges with the arrival of ChatGPT on Android.

But is Google at a disadvantage? Their strategy seems to be focused on improving the apps that users already access regularly, rather than opting to release a dedicated app for Bard.

Google recently announced a list of new Bard updates, saying, “Bard will become more visual both in its responses and your prompts.” By smartly using the existing features of Google Lens, the chatbot will provide better responses to prompts, and users can also use their own images as prompts in Bard.

Instead of releasing an app, Google detailed the many ways it plans to integrate Bard into “Google apps and services you may already use.” Using Bard, you can write an email and then click the “draft in Gmail” button.

This still leaves a gap for users who want the chatbot on their phones. It could go either way for Google; the previously reported Bard widget alone would have been enough to considerably grow its user base.

OpenAI’s Karpathy Creates Baby Llama Instead of GPT-5

The person who could plausibly build GPT-5 over a weekend is, surprisingly, spending his time testing the capabilities of the open-source Llama 2. The quest to run LLMs on a single computer led OpenAI’s Andrej Karpathy, known for his contributions to the field of deep learning, to embark on a weekend project: a simplified version of the Llama 2 model. And here it is!

For this, “I took nanoGPT, tuned it to implement the Llama 2 architecture instead of GPT-2, and the meat of it was writing the C inference engine in run.c,” explained Karpathy in the llama2.c GitHub repository. His objective was to adapt nanoGPT to the Llama 2 architecture and implement the inference engine in plain C. The repository has already earned 2.2K stars.

The success of Karpathy’s approach lies in its ability to achieve highly interactive rates even with reasonably sized models containing a few million parameters: his ~15-million-parameter Llama 2 model, trained on the TinyStories dataset, infers at around 100 tokens per second in fp32 on his M1 MacBook Air, all through the C code he developed. The result is surprising because it demonstrates the feasibility of running complex models on resource-constrained devices with a straightforward implementation.
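As a rough sanity check of those numbers (my own back-of-envelope arithmetic, not from Karpathy's write-up), a transformer forward pass costs roughly two FLOPs per parameter per generated token:

```python
# Assumption: ~2 FLOPs per parameter per generated token for a forward pass.
params = 15_000_000
tokens_per_sec = 100
flops_per_sec = 2 * params * tokens_per_sec
print(f"{flops_per_sec / 1e9:.1f} GFLOP/s")  # prints 3.0 GFLOP/s
```

A few GFLOP/s is comfortably within the fp32 throughput of a single M1 core, which helps explain why a straightforward C loop keeps up.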

Furthermore, in a discussion on Hacker News, Karpathy explained that inference on the MacBook Air M1 was much faster than anticipated, at around 100 tokens per second. Encouraged by this result, he has been actively updating the repository and has started testing a 44-million-parameter model, roughly three times larger. Notably, he was able to train it for 200k iterations with a batch size of 32 on 4 A100 GPUs in around 8 hours.

“With this progress, it seems that achieving the 7B Llama model might be within grasp,” said Karpathy, who is known for several courses, such as building GPT from scratch. People congratulated OpenAI for hiring Karpathy back from Tesla.

What is the Baby-Llama approach?

Karpathy said the approach was heavily inspired by Georgi Gerganov’s project llama.cpp, which did much the same for the first version of LLaMA, running it on a MacBook using C and C++.

Karpathy’s approach involves training the Llama 2 LLM architecture from scratch using PyTorch. After training, he saves the model weights to a raw binary file. The interesting part comes next: he writes a 500-line C file, named “run.c,” which loads the saved model and performs inferences using single-precision floating-point (fp32) calculations. This minimalistic approach ensures a low-memory footprint and requires no external libraries, allowing efficient execution on a single M1 laptop without the need for GPUs.
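The export step described above can be pictured with a minimal Python sketch. This is a toy with made-up tensor names, and the real llama2.c export also writes a short header with the model's dimensions, which this omits:

```python
import numpy as np

# Flatten every weight tensor to raw little-endian fp32 bytes that a small
# C program can fread() straight into preallocated float arrays.
rng = np.random.default_rng(0)
weights = {
    "tok_embeddings": rng.standard_normal((8, 4)),  # toy shapes
    "output": rng.standard_normal((4, 8)),
}
with open("model.bin", "wb") as f:
    for name, w in weights.items():
        f.write(w.astype("<f4").tobytes())  # row-major fp32, no metadata
```

On the C side, the matching loader is then essentially a sequence of fread() calls into float buffers, which is why no external libraries are needed at inference time.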

My fun weekend hack: llama2.c 🦙🤠https://t.co/CUoF0l07oX
Lets you train a baby Llama 2 model in PyTorch, then inference it with one 500-line file with no dependencies, in pure C. My pretrained model (on TinyStories) samples stories in fp32 at 18 tok/s on my MacBook Air M1 CPU. pic.twitter.com/aBvKCf1t2u

— Andrej Karpathy (@karpathy) July 23, 2023

Karpathy also explores several techniques to improve the performance of the C code, including different compilation flags like -O3, -Ofast, -march=native, and more. These flags optimise the code by enabling vectorization, loop unrolling, and other hardware-specific tuning. By experimenting with these flags, users can achieve even faster inferences on their specific systems.

To try out the baby Llama 2 model on your own device, you can download the pre-trained model checkpoint from Karpathy’s repository. The provided code will enable you to compile and run the C code on your system, offering a glimpse into the magic of running a deep learning model in a minimalistic environment.

It’s crucial to note, as Karpathy acknowledges, that the project is a weekend experiment and not intended for production-grade deployment. The primary focus was to demonstrate the feasibility of running Llama 2 models on low-powered devices using pure C, a language long overlooked for machine learning because it lacks the GPU-backed frameworks the field relies on.

The Rise of Tiny LLMs

The biggest reason models have been getting smaller is so they can be trained and integrated on smaller, local devices. Beyond not requiring a GPU, Karpathy’s approach sets a precedent for what can be achieved on single devices. It is possible that, through Meta’s partnership, Microsoft will release a batch of tiny LLMs based on Llama 2.

Along similar lines, Meta’s release of Llama 2 came with a notable partnership with chipmaker Qualcomm to make Llama 2 run on local hardware. Apple, too, has a massive developer ecosystem, for which the company recently released a Transformer architecture optimised for Apple Silicon. Karpathy has already shown that a lot is possible.

Decoding Death Virtually in India

About a year ago, popular comedian Raju Srivastav succumbed to cardiac arrest after a workout session. His post-mortem was done digitally, through a method called virtual autopsy.

Virtual autopsy, also known as virtual post-mortem examination or virtual autopsy imaging, is a modern and non-invasive method of examining a body to determine the cause of death or investigate injuries. This advanced technique uses imaging technologies to provide a detailed analysis of the body without the need for traditional invasive autopsy procedures.

Additionally, virtual autopsies are minimally invasive, enabling the prompt release of the body for cremation or burial. This preserves the dignity of the deceased and provides relief to their families, who may have otherwise received a stitched-up body after the post-mortem examination.

Unlike traditional autopsies, which require incisions and tissue sampling, virtual autopsies are non-invasive and do not require physical alteration of the body. This can be particularly important in cases where cultural or religious beliefs prohibit traditional autopsies.

While generally non-invasive, the procedure can be minimally invasive in certain situations. Like a traditional autopsy, it identifies regions of interest in the body to determine the cause of death; unlike one, it avoids cutting the entire body, focusing instead on specific areas, such as the thoracic region, and making only small incisions when necessary, for example for a needle biopsy.

The Tech Stack

In an interview with AIM, Ash Govind, Founder & CEO of Virtual Autopsy Solutions, explained that their work has two software requirements: visualisation and a forensic information system. Visualisation involves manipulating 3D images of the body, while the forensic information system logs the entire case, from the scene of crime or death to the post-mortem examination, including additional examinations like toxicology, microbiology, or DNA testing.

The global medical-legal system aims to determine the probable cause of death in cases of unnatural, sudden, and unexpected deaths. Traditionally, this was done through physical autopsies using a scalpel, but now it can be accomplished through multimedia files that include videos, screenshots, and other digital data in a report.

The process involves using a CT scanner to obtain DICOM data, which is rendered through software to create a 3D reconstruction of the body. Pathologists and radiologists examine the 3D reconstruction to determine the probable cause of death, and this information is then integrated into a multimedia report on the cause of death.
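Conceptually, the reconstruction step looks something like the following sketch. The shapes and values are hypothetical; real pipelines decode the DICOM files with a library such as pydicom and hand the volume to a rendering engine:

```python
import numpy as np

# Toy stand-in for decoded CT data: 200 axial slices of 512x512 int16
# Hounsfield-unit pixels (here filled with the value for air), stacked
# into a single 3D volume that a renderer can display.
slices = [np.full((512, 512), -1000, dtype=np.int16) for _ in range(200)]
volume = np.stack(slices, axis=0)
print(volume.shape)  # (200, 512, 512)
```

Once the slices are stacked, thresholding the Hounsfield values lets the software isolate bone or soft tissue for the 3D view the pathologists examine.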

The software works effectively enough that, according to Govind, they conduct approximately 13,000 cases annually in the UK, making the UK the world leader in post-mortem imaging and virtual autopsies.

India Adoption

In 2019, Union health minister Harsh Vardhan expressed interest in AIIMS and ICMR’s initiative to establish a virtual autopsy lab. He emphasised the government’s commitment to developing multiple centres across the country. To support the implementation of virtual autopsies, the Indian Council of Medical Research (ICMR) provided Rs 5 crore to AIIMS, and the process of acquiring a CT machine for the procedure is in progress.

Initially, the virtual autopsy facility will be exclusive to AIIMS, but there are plans to extend it to other medical institutions nationwide, with AIIMS providing training. AIIMS currently conducts approximately 3,000 autopsies per year, as per the statement made to the Lok Sabha.

Virtual Autopsy India is in conversations with AIIMS Delhi and other medical institutions and academia across the country. Institutions such as AIIMS Bibinagar, AIIMS Nagpur, Government Medical College Kashmir, and PGIMS Rohtak are also in the process of setting up virtual autopsy centres. Nor is it limited to India: globally, Virtual Autopsy India is in discussion with the Dubai Police and the Kazakhstan government as well.

In June 2023, the Government Medical College (GMC) Anantnag hosted a groundbreaking two-day conference on ‘virtual autopsy’, the first of its kind in the Union Territory of Jammu and Kashmir. Organised by the Department of Forensic Medicine, the conference aimed to explore the non-invasive method. Dr Azia Manzoor, the Head of the Department, provided an overview of virtual autopsy, highlighting how it digitally reconstructs detailed images of the body’s internal structures.

Dr Hemant Naik, CMO of Virtual Autopsy India, explained the benefits of virtual autopsy, emphasizing its precision and non-invasiveness. The conference was well-attended, with over 200 delegates from Jammu and Kashmir and other parts of the country, particularly North India.

By involving forensic experts, trained autopsy technicians, and police officers, the process’s effectiveness and accuracy can be further enhanced.

Challenges Facing Adoption of Virtual Autopsy

In a conversation with AIM, Dr Hemant Naik discussed the challenges facing the adoption of virtual autopsies in India.

Firstly, acquiring funds is a major obstacle due to the significant cost of setting up the required infrastructure, such as a CT scanner and other technology, including software and hardware. The total cost of each project can exceed Rs 10 lakh, making government funding essential, and securing it can be challenging.

Secondly, as virtual autopsy is a new technology, medical professionals need proper training to use it effectively. While training initiatives can address this, it still requires dedicated effort.

The third challenge lies in the ambiguity within the Indian medico-legal system regarding the acceptance of virtual autopsy evidence in court. As there are no clear regulations, practitioners are uncertain whether virtual autopsy findings will be considered authentic evidence. Although the Indian Evidence Act of 1872, as amended, allows digital evidence to be submitted, it does not specifically address medical imaging evidence, creating a grey area in the legal acceptance of virtual autopsy results.

Overcoming these obstacles will be crucial in fully adopting and utilizing virtual autopsies in the Indian medical field.

The Benefits

The advantages of virtual autopsies are numerous. Besides being non-invasive, a virtopsy also saves time compared to a conventional autopsy due to quicker imaging processes. Moreover, virtual autopsies preserve the body, ensuring that evidence remains intact for potential future investigations. The use of advanced imaging technologies enables detailed visualisation and interactive analysis in three dimensions, aiding in the identification of injuries or abnormalities contributing to the cause of death.

Additionally, virtual autopsies have educational and research benefits. The digital storage of reports allows for future reviews, which is not feasible in cases where cremation is performed. They serve as valuable tools for training forensic pathologists and medical students, and they facilitate research through the storage and analysis of large datasets.

Autopsies play a crucial role in police investigations, especially in cases of unnatural deaths. Traditional autopsies can take anywhere from 30 minutes to three days, depending on the complexity and availability of experts. However, virtual autopsy offers a faster alternative, with the procedure being completed within minutes. Dr Abhishek Yadav, an associate professor of forensic medicine at AIIMS, highlighted the time and manpower-saving benefits of virtual autopsies.

History of Virtual Autopsy

Dr Michael Thali, a professor at the University of Zurich and co-founder of The Virtopsy Project, introduced virtual autopsies in 1999, creating permanent 3D models of bodies that can be easily accessed and shared for second opinions. This technique has become common practice in Swiss forensic investigations and is gaining popularity worldwide. Although cost is a consideration, the benefits of preserving 3D information without altering the anatomy outweigh the expense.

The Virtobot system, a robotic tool working with CT scanners, generates high-resolution 3D images and documents injuries. The visualisation capabilities of virtual autopsies have proven valuable in court cases, aiding in understanding injuries.

While not yet widely used in the U.S., the military and some forensics institutes have adopted virtual autopsies.

India is positioned to be the first country in the South Asian region to introduce virtual autopsies. Several developed countries, including Switzerland, the UK, Germany, Canada, Australia, Japan, Hong Kong, Norway, Sweden, South Africa, Israel, and countries in the Middle East, have already adopted this innovative procedure.

Limitations

However, virtual autopsies have limitations. They may not be suitable for cases requiring histological or toxicological analyses, as these procedures typically require tissue samples. Furthermore, the accurate interpretation of imaging findings relies heavily on the skills and experience of the pathologist or radiologist performing the analysis.

To counter this, Naik described basic training, in which participants are introduced to the technology, learn to read the scans, and understand common findings. Subsequent training focuses on live demonstrations of scanning, reporting scans, performing biopsies, and conducting post-mortem examinations.

When asked about the accuracy rate, Naik confidently stated that it is up to 98%. However, in January, a study in Germany compared pre-death diagnoses with results from traditional and virtual autopsies. Among 47 patients, both types of autopsies were used, and among 115 patients, only virtual autopsies were performed due to family refusal of standard autopsies. Virtual autopsies confirmed 88% of pre-death diagnoses, while traditional autopsies had a confirmation rate of 93%.

The post Decoding Death Virtually in India appeared first on Analytics India Magazine.

Microsoft TypeChat Will Create The Apps Of The Future

Microsoft is in its AI age. The company has gone on an integration spree, putting OpenAI’s models into Office, Bing, and even Windows. Now, the tech giant is open sourcing a set of tools to bring this level of AI integration to all developers. The Redmond giant recently released TypeChat, a library of software tools that is aimed at augmenting traditional UI with the power of AI.

By targeting the code generation capabilities of LLMs, researchers at Microsoft have created a system that can help AI algorithms to communicate with applications. This way, the user can rely on natural language inputs to interface with software in addition to existing UI elements. What’s more, in true open source fashion, the toolkit can be used with any LLM of the developers’ choosing, opening the floodgates of AI-powered applications to the world.

Just as the company is trying to create the future of computing through the tight integration of AI into applications used by millions every day, it is also fostering a developer movement to bring AI into everyday applications more broadly.

TypeChat explained

One of the biggest issues developers have faced in integrating LLMs into software is that language models rarely produce machine-readable text. Even when the user prompts the model to provide structured data, LLMs fall short. However, the team at Microsoft found that these generative AI algorithms perform moderately well at translating user queries into JSON (JavaScript Object Notation), a format that works well for machines.
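To make the idea concrete, here is a minimal sketch of that pattern: instruct the model to answer in JSON only, then parse its reply. The `buildJsonPrompt` helper and the hard-coded model reply are illustrative stand-ins, not TypeChat code.

```typescript
// Sketch: steering an LLM toward machine-readable JSON instead of prose.

// Build a prompt that instructs the model to answer with JSON only.
function buildJsonPrompt(userRequest: string): string {
  return [
    "Translate the user request below into a JSON object.",
    "Respond with JSON only, no prose and no code fences.",
    `User request: "${userRequest}"`,
  ].join("\n");
}

// A simulated model reply, and the parse step an app would run on it.
const reply = '{"item": "latte", "size": "grande", "quantity": 2}';
const order = JSON.parse(reply); // throws if the model drifted into prose
console.log(order.item); // a plain object the rest of the app can use
```

The prompt does the steering; `JSON.parse` is the first gate that catches a model that slipped back into free text.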

In the example provided by Microsoft in its blog, the LLM was able to transcribe a customer’s order into a largely error-free JSON output. To make this repeatable, the researchers further refined the technique by adding a type requirement to the output: when asked to respond in a predefined output type called ‘Response’, the LLM produces a far more refined and structured output for the query.

This output can also be validated by the TypeScript compiler, which the researchers used for the example, paving the way for clean, structured, and machine-readable output that can be used in an app’s workflow. This method, tentatively termed a ‘response schema’, can serve a variety of applications if a schema is clearly defined for each use case. Applications showcased included sentiment analysis, application creation through an ‘API schema’, and a ‘data schema’ for structured outputs.
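A rough sketch of what a ‘response schema’ can look like in practice, assuming a simple sentiment-analysis use case. The `SentimentResponse` interface and the hand-written type guard below are illustrative; in TypeChat itself, the TypeScript compiler performs this validation against the schema.

```typescript
// Sketch: a 'response schema' as a TypeScript type, plus a runtime guard
// so the example is self-contained and checkable without a compiler step.

type Sentiment = "negative" | "neutral" | "positive";

interface SentimentResponse {
  sentiment: Sentiment;
}

// Runtime check that a parsed model reply actually matches the schema.
function isSentimentResponse(value: unknown): value is SentimentResponse {
  return (
    typeof value === "object" &&
    value !== null &&
    ["negative", "neutral", "positive"].includes(
      (value as { sentiment?: unknown }).sentiment as string
    )
  );
}

// A well-formed reply passes; a free-text reply is rejected and could be
// sent back to the model for repair.
const good = JSON.parse('{"sentiment": "positive"}');
const bad = JSON.parse('{"feeling": "great, thanks!"}');
console.log(isSentimentResponse(good)); // true
console.log(isSentimentResponse(bad)); // false
```

The point of the pattern is that the schema, not the prompt alone, becomes the contract between the model and the application.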

As mentioned previously, the project is not only open source, but also has plumbing to plug into different LLMs. While it was built with OpenAI API and Azure OpenAI service, the researchers clearly mentioned that it can be used with “any chat completion-style API”. However, they also stated that TypeChat works best with models that have been trained on both prose and code. TypeChat is just the latest announcement in an environment that is currently seeing a high degree of growth, enabling developers to create the apps of the future.
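The “any chat completion-style API” claim boils down to a thin adapter layer. The sketch below is an assumption about how such plumbing might look; the `ChatModel` interface and `echoModel` are hypothetical, and kept synchronous for brevity where a real client would be async.

```typescript
// Sketch: schema-driven tooling needs only one capability from a model:
// "string in, string out". Any chat-completion backend can hide behind it.

interface ChatModel {
  complete(prompt: string): string; // a real client would return a Promise
}

// A fake model for local testing; a real adapter would call OpenAI, Azure
// OpenAI, or a self-hosted LLM behind the same one-method interface.
const echoModel: ChatModel = {
  complete: (prompt) => `{"echo": ${JSON.stringify(prompt)}}`,
};

function translate(model: ChatModel, request: string): unknown {
  const reply = model.complete(request);
  return JSON.parse(reply); // schema validation would follow here
}

console.log(translate(echoModel, "hello"));
```

Swapping backends then means swapping one object, which is what makes the toolkit model-agnostic.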

AI tooling for the future

TypeChat is just the latest in a long line of tooling for LLM integration. Arguably kick-started with the launch of LangChain, this field has seen extreme innovation over the past few months. Even AI agents, which had their time in the sun with offerings like AutoGPT and Baby AGI, can be classified under the category of AI tooling. Also, LLMs have become more capable at parsing large databases thanks to vector database systems like Pinecone and Weaviate.

Any successful innovation in the software field is only amplified by movements in the developer ecosystem. The power of open dev tools is extremely apparent, as seen by the explosive innovation surrounding Meta’s LLaMA model. Even top executives at tech giants have acknowledged that top AI companies have no moat, and that open source will eventually win out.

According to a survey by Sequoia Capital, only 15% of respondents have built custom language models from scratch or from open source. However, 38% of them are interested in the app development framework around LLMs, and a whopping 94% use pre-trained models. Another interesting takeaway was that 88% of respondents stated that retrieval mechanisms, like vector databases, are a key part of their tech stack.

This shows the hunger that the field has for more developer-focused LLM tooling; an appetite Microsoft seems eager to sate. During the 2023 Microsoft Build conference, the company announced a spate of dev tools, from adopting OpenAI’s plugin standard to supercharging the WinML API.

These tools serve the same vertical as TypeChat, targeted squarely at developers looking to build AI applications. With the rise of this ecosystem, developers will soon have the tools to create applications that integrate AI behind a simple text box, and chaining together multiple models will create AI experiences previously confined to science fiction.

The post Microsoft TypeChat Will Create The Apps Of The Future appeared first on Analytics India Magazine.