Challenges of Contact Tracing in a Post-COVID World


As the world slowly emerges from the COVID-19 pandemic, contact tracing remains critical in preventing the spread of infectious diseases and managing potential outbreaks. Throughout the pandemic, contact tracing played an indispensable role in identifying, isolating, and treating infected individuals, thereby curbing the transmission of the virus.

Despite the widespread vaccination efforts and the gradual return to normalcy, the need for effective contact tracing has not waned in a post-COVID world. The reason is that new variants and pathogens continue to pose risks, making it crucial to identify and manage potential clusters of infections swiftly and efficiently.

In this context, data science and business analytics have emerged as significant contributors to enhancing contact tracing efforts. By combining these fields with advanced artificial intelligence and machine learning algorithms, professionals can address the challenges and complexities of contact tracing in an ever-changing global landscape.

Evolving Privacy Concerns

Let’s check out the evolving privacy concerns below:

  • Balancing Public Health and Privacy Rights

One of the most pressing challenges in contact tracing is balancing the need to safeguard public health against the protection of individual privacy rights. Collecting, storing, and sharing personal data, including health and location information, has raised concerns about the potential misuse or abuse of such sensitive information. As we navigate the post-COVID world, developing and implementing contact tracing systems that respect privacy while maintaining their effectiveness in mitigating the spread of infectious diseases is crucial.

  • Decentralized and Centralized Approaches to Contact Tracing

To address privacy concerns, two primary approaches to contact tracing have emerged: decentralized and centralized. Decentralized contact tracing systems store data locally on users’ devices, minimizing the risk of data breaches and unauthorized access. On the other hand, centralized systems store data on a central server, enabling more efficient data analysis and outbreak management. Each approach has its merits and drawbacks, and the choice between them often depends on a country’s legal framework, technological infrastructure, and cultural context.
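
To make the distinction concrete, here is a minimal sketch of the decentralized idea, written in Python with purely illustrative class and method names that are not taken from any real contact tracing app: exposure tokens are generated and matched on the device, and only the tokens of confirmed cases are ever published.

```python
# Minimal sketch of the decentralized idea: exposure tokens stay on the device,
# and matching happens locally against tokens published by confirmed cases.
# Class and method names are illustrative and not taken from any real app.
import secrets


class Device:
    def __init__(self):
        self.my_tokens = []        # rotating identifiers this device broadcasts
        self.seen_tokens = set()   # identifiers observed from nearby devices

    def rotate_token(self):
        token = secrets.token_hex(16)
        self.my_tokens.append(token)
        return token

    def record_contact(self, token):
        self.seen_tokens.add(token)

    def check_exposure(self, published_positive_tokens):
        # Matching runs on the device; the contact history never leaves it.
        return bool(self.seen_tokens & set(published_positive_tokens))


# Usage: Alice later tests positive and publishes her tokens; Bob checks locally.
alice, bob = Device(), Device()
bob.record_contact(alice.rotate_token())
print(bob.check_exposure(alice.my_tokens))  # True
```

In a centralized design, by contrast, both the broadcast tokens and the observed contacts would be uploaded to a health authority's server, which performs the matching and can also analyse the wider contact graph.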

  • Lessons Learned from the COVID-19 Pandemic: Data Protection and Trust-Building

The COVID-19 pandemic has underscored the significance of data protection and trust-building in contact-tracing efforts. Ensuring that personal data is collected, stored, and used responsibly is vital to fostering public trust in contact tracing systems. Transparency in data handling practices and robust data protection measures, such as data anonymization and encryption, can help alleviate privacy concerns and encourage greater public participation. As we face new health challenges in the post-COVID era, the lessons learned from the pandemic will be instrumental in shaping future contact tracing initiatives that respect privacy while promoting public health.

Technological Challenges and Solutions

This section discusses the challenges of integrating data from different sources, stressing the importance of interoperability and harmonization. Furthermore, it emphasizes the significance of data science, artificial intelligence, and machine learning in addressing these technological challenges.

  • The Role of Technology in Contact Tracing: Strengths and Limitations

Technology has played a critical role in enhancing the efficiency and accuracy of contact tracing efforts. Digital contact tracing apps, location data, and other technological innovations have helped streamline the process of identifying and notifying individuals who may have been exposed to an infectious disease. However, these technological solutions also have challenges, including data accuracy, accessibility, and compatibility.

  • Integrating Data from Different Sources: Interoperability and Data Harmonization

One major challenge in contact tracing is integrating data from various sources, such as healthcare systems, digital contact tracing apps, and manual contact tracing efforts. Ensuring interoperability and harmonizing data from these disparate sources is crucial for effective contact tracing, as it enables a more comprehensive understanding of disease transmission and helps guide targeted interventions. Professionals with expertise gained from a data science course can play a vital role in addressing these challenges by developing algorithms and data integration frameworks that enable seamless data sharing and analysis.
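
As a rough illustration of what harmonization can look like in practice, the sketch below maps two hypothetical sources, a tracing-app export and a manually kept call log with invented column names and records, onto one common schema using pandas.

```python
# Minimal sketch: harmonize contact records from two hypothetical sources
# (a tracing-app export and a manual call log) into one common schema.
# Column names and records are invented for illustration.
import pandas as pd

app_export = pd.DataFrame({
    "case": ["A17", "A18"],
    "exposure_ts": ["2023-04-01T10:30", "2023-04-02T14:05"],
    "contact_phone": ["555-0101", "555-0102"],
})

manual_log = pd.DataFrame({
    "CaseID": ["A19"],
    "Date of exposure": ["02/04/2023"],
    "Phone": ["(555) 0103"],
})

def harmonize(df, mapping, date_format=None):
    out = df.rename(columns=mapping)[["case_id", "exposure_date", "phone"]].copy()
    out["exposure_date"] = pd.to_datetime(out["exposure_date"], format=date_format).dt.date
    out["phone"] = out["phone"].str.replace(r"\D", "", regex=True)  # keep digits only
    return out

combined = pd.concat([
    harmonize(app_export,
              {"case": "case_id", "exposure_ts": "exposure_date", "contact_phone": "phone"}),
    harmonize(manual_log,
              {"CaseID": "case_id", "Date of exposure": "exposure_date", "Phone": "phone"},
              date_format="%d/%m/%Y"),
], ignore_index=True)
print(combined)
```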

  • The Significance of Data Science in Addressing Technological Challenges

Data science has emerged as a critical field in addressing the technological challenges associated with contact tracing. By taking a data science course, professionals can acquire the skills to analyze large and complex datasets, design sophisticated algorithms, and develop innovative solutions to improve contact tracing efforts. Equipped with a strong foundation in data science, these professionals can contribute to overcoming the technological hurdles in contact tracing, ultimately helping to safeguard public health in a post-COVID world.

  • The Use of Artificial Intelligence and Machine Learning in Contact Tracing

Artificial intelligence (AI) and machine learning (ML) have proven to be powerful tools in enhancing contact tracing efforts. These technologies can process massive amounts of data, identify patterns, and make predictions, which are crucial in the early detection and management of infectious disease outbreaks. By learning AI and ML and incorporating them into contact tracing systems, public health authorities can better anticipate and respond to new pathogens and variants as they emerge. Contact tracing systems can become more agile and responsive, which makes adjusting to the evolving public health landscape easier.
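
As a simplified sketch of the prediction side, the example below trains a basic classifier on synthetic contact features (duration, distance, indoor/outdoor flag) to score which traced contacts are most likely to lead to onward transmission; the features, data, and model choice are invented for illustration and not drawn from any real system.

```python
# Minimal sketch: score traced contacts by transmission risk with a basic
# classifier. Features and labels are synthetic, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: contact duration (minutes), distance (metres), indoor flag
X = np.column_stack([
    rng.uniform(1, 120, n),
    rng.uniform(0.5, 5.0, n),
    rng.integers(0, 2, n),
])
# Synthetic labels: longer, closer, indoor contacts are more likely to transmit
score = 0.02 * X[:, 0] - 0.8 * X[:, 1] + 1.5 * X[:, 2]
y = (score + rng.normal(0, 1, n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Predicted probabilities could be used to prioritise which contacts to call first.
print("risk for a 60-minute, 1 m, indoor contact:",
      model.predict_proba([[60, 1.0, 1]])[0, 1])
```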

International Collaboration and Coordination

Let’s discuss why global cooperation matters, the value of sharing best practices and lessons learned, and the barriers that can stand in the way of effective collaboration:

  • The Importance of Global Cooperation in Contact Tracing Efforts

In a globalized world, infectious diseases know no borders. Therefore, effective contact tracing efforts must extend beyond individual nations, requiring international collaboration and coordination. Global cooperation is essential for sharing information on new pathogens and variants, monitoring cross-border transmission, and implementing coordinated public health measures. By working together, countries can better understand the spread of infectious diseases, identify potential risks, and develop targeted interventions to protect global public health.

  • Sharing Best Practices and Lessons Learned

International collaboration allows countries to share best practices and lessons from contact tracing efforts. By exchanging knowledge, resources, and experiences, public health authorities can identify and adopt effective strategies that have proven successful in various contexts. This collaborative approach allows for the continuous improvement of contact tracing systems worldwide, fostering innovation and driving better outcomes in infectious disease control.

  • Overcoming Barriers to Effective International Collaboration

Despite the clear benefits of international collaboration in contact tracing, various barriers can impede effective cooperation. These obstacles may include differences in legal frameworks, data privacy regulations, technological infrastructure, and cultural norms. In order to overcome these barriers, countries must engage in open dialogue, build trust, and work towards harmonized policies and standards that facilitate data sharing and cooperation. By addressing these challenges and fostering a spirit of global collaboration, we can build a more resilient and responsive contact tracing system that is better equipped to confront the public health threats of the post-COVID world.

Wrapping up

The ongoing importance of contact tracing in a post-COVID world cannot be overstated, as new pathogens and variants continue to pose risks to global public health. By investing in the development of innovative contact-tracing solutions and fostering a skilled workforce, we can create more effective and adaptive contact-tracing strategies.

The lessons learned from the COVID-19 pandemic provide valuable insights that can guide the continuous improvement of contact tracing systems. By embracing collaboration and innovation, we can ensure that contact tracing remains a powerful tool in protecting public health and navigating the new normal in a post-COVID world.

This AI Startup is Building Better Tools Than AWS Sagemaker and Google AutoML

There are different generations of tools when it comes to automated machine learning. Legacy tools such as AWS Sagemaker and Google AutoML have existed for quite some time now, and their key goal was to help machine learning engineers and other users who are themselves technical.

But now there are even better tools. Nirman Dave, CEO at Obviously AI, claimed that their models are better than those of AWS Sagemaker or Google AutoML.

Based in San Francisco, Obviously AI specialises in building no-code AI models for businesses. Dave founded the startup with a mission to transform every company into an AI company, claiming to have developed the fastest and most accurate no-code AI tool to date.

“So what happens is that the customer in just a few minutes gets access to these machine learning models that they can build or customise for themselves,” explained Dave.

Legacy tools, on the other hand, have been around for nearly 13-15 years and are very slow at building models. “What’s really special about what we do is we build the models in less than a minute. So we’re the fastest tool to build AI models today,” Dave added.

Nonetheless, Dave also acknowledges that his company is not the only one building similar tools. But even though other companies have built no-code AI tools, their approaches differ considerably.

“We mostly focus on tabular data, supervised learning. We have now kind of branched into unsupervised learning as well with our custom Large Language Models (LLMs) offering.”

But Levity AI, for example, another company building no-code AI tools, focuses mostly on image, video or audio type of data.

Demand

Founded in 2021, Obviously AI has onboarded 52 customers so far. While most of them are in the US, the startup also has customers in India, Japan, and South Africa.

“Currently, week over week we’re growing about 15% in terms of customer acquisition,” he said.

One of the big customers the startup has in India is a large consumer bank. Dave revealed that they are building a loan repayment model for this bank. Normally, the underwriting process for the loans takes a significant amount of time when done manually.

“They wanted to build a model that can quickly process the loan, predict default probability and give it out to the underwriters to make a decision,” Dave said.

Manufacturers of AI models

The startup mostly works with mid-market businesses that don’t have a data science team or cannot scale one, as well as enterprise businesses that might have a backlogged data science team.

“Essentially, we are manufacturers of AI models,” Dave said. The process begins with the customer bringing a collection of data to the platform and specifying the prediction or AI model they want built on that data.

“And we help with everything from data cleaning, to model selection to hyper parameter tuning, to model deployment and management, all of that process is done automatically by the system,” Dave added.

The user interface is also designed for people who are not heavily technical. Another thing Obviously AI provides, and excels at, is dedicated data science support.

“So essentially, we work with a lot of data scientists. These are individual practitioners, individuals that run their own consulting firms or folks that are just starting out, and we provide them with connections to these customers that we have,” Dave said.

Impact on jobs

At the same time, there is a growing concern that no-code tools could have a negative impact on jobs, particularly those that involve coding and software development. As these tools become more sophisticated and accessible, they may reduce the need for traditional coding skills in certain roles, leading to a shift in the job market.

However, Dave believes Obviously AI’s no-code tool is not replacing data scientists; instead, it is accelerating the data science process.

“The reason companies like Hewlett-Packard use us is not because they want to get rid of their data science team. It’s actually that they want to accelerate the data science team.

“The goal here is to help them move significantly faster. The data science team is going to take two months to get from raw data to insights and analytics and predictions. Now it’s going to take them a week or a couple of days.”

Besides, a data scientist’s job is not solely to build AI models, but also to build a strategy.

“So I don’t think data scientists will be replaced. I think no-code tools will only help them focus more on the strategy which is the most exciting part of the job,” Dave concluded.


7 Most Influential Tech Spouses

The world has always been in awe of the power of tech geniuses and their ability to shape the future with their innovations. But what’s even more impressive is that even these brilliant minds cannot deny the power of love.

As per a survey, 84% of entrepreneurs rely on their spouse or partner for emotional support, 73% said their spouse or partner helped with the financial management of their business, and 68% reported that their relationship with their spouse or partner had improved since starting their business. This highlights that no matter how successful and accomplished one may be, they still need the love and support of their significant other to thrive.

Let’s take a look at some inspiring love stories.


Priscilla Chan

Born to Chinese parents in Massachusetts, Priscilla Chan met Mark Zuckerberg (co-founder and chief of Meta) at Harvard; the college sweethearts married in 2012 and have often acknowledged how they have shaped each other, both personally and professionally. Priscilla is a pediatrician. But what sets this power couple apart is not just their individual success, but their shared commitment to making a positive impact on the world. They have established the Chan Zuckerberg Initiative, a philanthropic organisation dedicated to enhancing global health and education.

Anjali Pichai

We cannot talk about love and not mention the Pichais. Sundar and Anjali Pichai are one of the most celebrated couples in Silicon Valley. Anjali pursued a bachelor’s degree in chemical engineering at IIT-Kharagpur, where she met Sundar. She currently works as a business operations manager at software firm Intuit. Sundar attributes his rise to the top job at Google to the fact that he stayed with the company, thanks to Anjali’s persuasion, even when he received lucrative offers from Yahoo and Twitter. The couple has two children – Kiran and Kavya.

Zachary Bogue

Zachary Bogue has been married to Marissa Mayer, co-founder of software company Sunshine and former president and CEO of Yahoo, for over 14 years now. Bogue is the managing partner and co-founder of investment firm DCVC.

Bogue is a strong advocate for the potential of deep tech and environmental science to address some of the most persistent real-world issues. Mayer’s estimated net worth is approximately $680 million, while Bogue has amassed over $300 million.

Sean Eldridge

Canadian-American political activist Sean Eldridge married Facebook co-founder Chris Hughes. He founded and leads a liberal advocacy group named Stand Up America. Prior to this, he worked as the political director of Freedom to Marry, an organisation that worked towards legalising same-sex marriage. In January 2011, Eldridge and Hughes announced their engagement at a gathering that supported the same cause. The couple has a son together. In 2014, Eldridge ran for a seat in Congress representing New York’s 19th district but lost to Republican incumbent Chris Gibson. Together, they are working towards a world where everyone has equal rights and opportunities.

Lucinda Southworth

Google co-founder and former Alphabet CEO Larry Page found his love in the equally accomplished Lucinda Southworth, a research scientist and the sister of model and actress Carrie Southworth. She obtained her PhD in biomedical informatics from Stanford University and has worked as a research scientist at various institutions, such as the National Institutes of Health. The couple have contributed significantly to philanthropic causes such as Parkinson’s disease research and clean energy initiatives by donating millions of dollars. In spite of their popularity, both Larry Page and Lucinda Southworth prefer a low-key lifestyle and avoid the limelight as much as possible.

Miranda Kerr

One half of one of the richest and most stylish couples out there, Miranda Kerr is the wife of entrepreneur and Snapchat co-founder Evan Spiegel. Kerr gained fame in the early 2000s as a Victoria’s Secret Angel and is now one of the top-earning models globally. Additionally, she founded a wellness business called KORA Organics. Spiegel and Kerr started dating in 2015 and became engaged in July 2016. They now have two sons, Hart and Myles. Kerr has been quoted as saying that Spiegel inspired her to take on bigger challenges: “Why are you spending your energy working for other companies when you should be focusing on your own? You need to take a risk. If you believe in this, put everything into it.”

Natasha Bassett
When it comes to matters of the heart, Tesla and SpaceX chief Elon Musk has had his fair share of missteps. After weathering multiple heartbreaks, Musk is reportedly dating Elvis (the Elvis Presley biopic) star Natasha Bassett as of last year. The 30-year-old Australian actor has featured in films like Spy Intervention, The Pale Door, 12 Mighty Orphans and more. Earlier, he tied the knot with Justine Wilson in 2000 and had six children with her before their split in 2008. Musk went on to marry actress Talulah Riley on two occasions, but both marriages eventually ended in divorce. He also dated actress Amber Heard, followed by a partnership with musician Grimes, with whom he had two children. Musk also became a father to twins with Shivon Zilis, a top executive at his company Neuralink, in 2021. Musk has always been vocal about the importance of having a companion and longs for his true love.


What Are LLM Hallucinations? Causes, Ethical Concern, & Prevention

Large language models (LLMs) are artificial intelligence systems capable of analyzing and generating human-like text. But they have a problem – LLMs hallucinate, i.e., make stuff up. LLM hallucinations have made researchers worried about the progress in this field because if researchers cannot control the outcome of the models, then they cannot build critical systems to serve humanity. More on this later.

Generally, LLMs use vast amounts of training data and complex learning algorithms to generate realistic outputs. In some cases, in-context learning is used to adapt these models using only a few examples. LLMs are becoming increasingly popular across application areas such as machine translation, sentiment analysis, virtual AI assistants, image annotation, and other natural language processing tasks.
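
To illustrate what in-context learning looks like in practice, the snippet below builds a simple few-shot sentiment prompt; the examples are invented, and the resulting string could be submitted to any LLM API (no model is actually called here).

```python
# Minimal sketch of in-context (few-shot) learning: the "training" is just a
# handful of labelled examples placed in the prompt itself.
examples = [
    ("The delivery was fast and the staff were lovely.", "positive"),
    ("The app crashes every time I open it.", "negative"),
]
new_review = "The food was cold but the waiter was friendly."

prompt = "Classify the sentiment of each review as positive or negative.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {new_review}\nSentiment:"

print(prompt)  # the model would complete the final "Sentiment:" line
```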

Despite the cutting-edge nature of LLMs, they are still prone to biases, errors, and hallucinations. Yann LeCun, current Chief AI Scientist at Meta, recently mentioned the central flaw in LLMs that causes hallucinations: “Large language models have no idea of the underlying reality that language describes. Those systems generate text that sounds fine, grammatically, and semantically, but they don’t really have some sort of objective other than just satisfying statistical consistency with the prompt”.

Hallucinations in LLMs


Hallucinations refer to the model generating outputs that are syntactically and semantically correct but disconnected from reality and based on false assumptions. Hallucination is one of the major ethical concerns of LLMs, and it can have harmful consequences as users without adequate domain knowledge start to over-rely on these increasingly convincing language models.

A certain degree of hallucination is inevitable across all autoregressive LLMs. For example, a model can attribute to a celebrity a fabricated quote that was never said. Models may assert something about a particular topic that is factually incorrect or cite non-existent sources in research papers, thus spreading misinformation.

However, getting AI models to hallucinate does not always have adverse effects. For example, a new study suggests scientists are unearthing ‘novel proteins with an unlimited array of properties’ through hallucinating LLMs.

What Causes LLM Hallucinations?

LLMs can hallucinate due to various factors, ranging from overfitting and errors in encoding and decoding to training bias.

Overfitting


Overfitting is an issue where an AI model fits the training data too well, yet cannot represent the whole range of inputs it may encounter, i.e., it fails to generalize its predictive power to new, unseen data. Overfitting can lead to the model producing hallucinated content.
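
A small, self-contained illustration of the phenomenon (using synthetic data and scikit-learn rather than an LLM) is sketched below: a high-degree polynomial drives the training error towards zero while the error on unseen data grows.

```python
# Minimal sketch of overfitting on synthetic data: a high-degree polynomial
# fits the training points almost perfectly but generalizes poorly.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 1, 20)).reshape(-1, 1)
y = np.sin(2 * np.pi * X).ravel() + rng.normal(0, 0.2, 20)   # noisy signal
X_test = np.linspace(0, 1, 100).reshape(-1, 1)
y_test = np.sin(2 * np.pi * X_test).ravel()

for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(X, y)
    print(f"degree {degree:2d}: "
          f"train MSE={mean_squared_error(y, model.predict(X)):.3f}  "
          f"test MSE={mean_squared_error(y_test, model.predict(X_test)):.3f}")
# The degree-15 model has near-zero training error but much larger test error:
# it has memorized noise rather than the underlying pattern.
```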

Encoding and Decoding Errors


If there are errors in the encoding and decoding of text and its subsequent representations, this can also cause the model to generate nonsensical and erroneous outputs.

Training Bias


Another factor is the presence of certain biases in the training data, which can cause the model to give results that represent those biases rather than the actual nature of the data. This is similar to the lack of diversity in the training data, which limits the model’s ability to generalize to new data.

The complex structure of LLMs makes it quite challenging for AI researchers and practitioners to identify, interpret, and correct these underlying causes of hallucinations.

Ethical Concerns of LLM Hallucinations

LLMs can perpetuate and amplify harmful biases through hallucinations and can, in turn, negatively impact the users and have detrimental social consequences. Some of these most important ethical concerns are listed below:

Discriminating and Toxic Content


Since LLM training data is often full of sociocultural stereotypes due to inherent biases and a lack of diversity, LLMs can produce and reinforce these harmful ideas against disadvantaged groups in society.

They can generate this discriminating and hateful content based on race, gender, religion, ethnicity, etc.

Privacy Issues


LLMs are trained on a massive training corpus which often includes the personal information of individuals. There have been cases where such models have violated people’s privacy. They can leak specific information such as social security numbers, home addresses, cell phone numbers, and medical details.

Misinformation and Disinformation


Language models can produce human-like content that seems accurate but is, in fact, false and not supported by empirical evidence. This can be accidental, leading to misinformation, or it can have malicious intent behind it to knowingly spread disinformation. If this goes unchecked, it can create adverse social, cultural, economic, and political trends.

Preventing LLM Hallucinations


Researchers and practitioners are taking various approaches to address the problem of hallucinations in LLMs. These include improving the diversity of training data, eliminating inherent biases, using better regularization techniques, and employing adversarial training and reinforcement learning, among others:

  • Developing better regularization techniques is at the core of tackling hallucinations. They help prevent overfitting and other problems that cause hallucinations.
  • Data augmentation can reduce the frequency of hallucinations, as evidenced by a research study. It involves augmenting the training set by adding a random token anywhere in the sentence, which doubles the size of the training set and decreases the frequency of hallucinations (a minimal sketch of this idea appears after this list).
  • OpenAI and Google’s DeepMind developed a technique called reinforcement learning with human feedback (RLHF) to tackle ChatGPT’s hallucination problem. It involves a human evaluator who frequently reviews the model’s responses and picks out the most appropriate for the user prompts. This feedback is then used to adjust the behavior of the model. Ilya Sutskever, OpenAI’s chief scientist, recently mentioned that this approach can potentially resolve hallucinations in ChatGPT: “I’m quite hopeful that by simply improving this subsequent reinforcement learning from the human feedback step, we can teach it to not hallucinate”.
  • Identifying hallucinated content to use as an example for future training is also a method used to tackle hallucinations. A novel technique in this regard detects hallucinations at the token level and predicts whether each token in the output is hallucinated. It also includes a method for unsupervised learning of hallucination detectors.
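
As a rough sketch of the random-token augmentation mentioned above (the implementation details here are invented for illustration and simplified from the cited study):

```python
# Minimal sketch: each training sentence gets a copy with one random token
# inserted at a random position, doubling the training set.
import random

random.seed(0)
vocab = ["the", "report", "data", "model", "blue", "seven"]

def augment(sentence):
    tokens = sentence.split()
    position = random.randint(0, len(tokens))
    tokens.insert(position, random.choice(vocab))
    return " ".join(tokens)

train_set = ["the model generates a summary", "the system answers the question"]
augmented_train_set = train_set + [augment(s) for s in train_set]
print(augmented_train_set)  # original sentences plus perturbed copies
```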


Put simply, LLM hallucinations are a growing concern. And despite the efforts, much work still needs to be done to address the problem. The complexity of these models means it’s generally challenging to identify and rectify the inherent causes of hallucinations correctly.

However, with continued research and development, mitigating hallucinations in LLMs and reducing their ethical consequences is possible.

If you want to learn more about LLMs and the preventive techniques being developed to rectify LLM hallucinations, check out unite.ai to expand your knowledge.

Top 8 AI Chips for Your Generative AI Play 

The introduction of ChatGPT and DALL-E 2 brought generative AI to the forefront of public attention, resulting in unprecedented levels of excitement around the technology heading into 2023. This has made chips that can handle AI at a large scale more important than ever.

The AI chip market is expected to grow at a CAGR of 51% and reach $73.49 billion by 2025. Semiconductor companies could capture 40-50% of the total market share. Companies such as Alphabet, Broadcom, Intel, NVIDIA, Qualcomm, Samsung Electronics, and TSMC make chips used to train AI models. According to research, NVIDIA has captured 88% of the GPU market.

Consequently, many consider NVIDIA the primary beneficiary of the flourishing generative AI domain. But other significant players, such as Cerebras, Alphabet, and IBM, have also forayed into this space.

Here’s a list of some of the top AI chips:

Jetson – Nvidia

Jetson is a line of embedded computing boards from Nvidia that are designed to power AI and computer vision applications in edge devices. The Jetson platform includes a range of products, from entry-level development kits to high-performance supercomputers. These boards feature Nvidia’s GPU technology, as well as CPU and I/O capabilities, and are optimised for running deep learning models and other AI algorithms. Jetson boards are commonly used in applications such as autonomous robots, drones, medical devices, and industrial automation. Nvidia also provides a software development kit (SDK) and libraries, including CUDA and cuDNN, that enable developers to build and deploy AI applications on Jetson.

Cerebras Systems WSE

The Cerebras Wafer Scale Engine is a specialised chip that accelerates AI workloads. It is a large single chip with 1.2 trillion transistors and 400,000 AI-optimised processing cores that work together to perform AI computations at an unprecedented scale and speed. The chip’s unique design allows it to be easily integrated into existing data centre infrastructure. The WSE has a successor called the WSE-2, which has significant improvements over the original WSE, including more processing cores, improved memory, and performance. Both chips offer new possibilities for AI research and deployment.

Amazon AWS Inferentia

AWS Inferentia is a custom-designed machine learning inference chip developed by Amazon Web Services (AWS) to accelerate the performance of deep learning applications in the cloud. It is specifically designed to optimise the processing of large-scale neural networks used for machine learning inference. AWS Inferentia is built with a large amount of on-chip memory and many processing cores, enabling it to perform a large number of computations in parallel. This results in faster and more cost-effective inference performance for machine learning models in production. Inferentia is integrated into AWS services, such as Amazon SageMaker and AWS Lambda, allowing users to easily deploy and run machine learning applications in the cloud. AWS also provides a software development kit (SDK) and supports frameworks such as TensorFlow, enabling developers to build and optimise their machine learning models for Inferentia.

IBM Power10

In August 2021, IBM announced the Power10 microprocessor. It has been designed to offer high performance and scalability for enterprise workloads in AI, cloud computing, and hybrid cloud environments. Power10 has 18 billion transistors and is made using a 7nm process technology. It comes with up to 15 processor cores that can execute up to 8 threads simultaneously, enabling it to handle 120 threads concurrently. The chip’s advanced memory features include support for HBM2e memory, which delivers four times the memory bandwidth of DDR4. Additionally, it has new hardware-based security features like transparent memory encryption and secure boot, which provide protection against cyber threats. Power10 is a robust and flexible microprocessor that can meet the requirements of contemporary enterprise workloads, particularly in AI and cloud computing.

Later, in mid-2022, IBM announced the expansion of its Power10 server line, introducing mid-range and scale-out systems to enhance and automate business applications and IT operations.

Qualcomm Hexagon Vector Extensions

Qualcomm Hexagon Vector Extensions (HVX) is a hardware platform developed by Qualcomm for mobile and embedded devices. It is designed to accelerate machine learning and other high-performance computing workloads. HVX is a vector processing unit that processes multiple data elements in parallel with optimised instructions for machine learning workloads. It has a large number of vector registers and supports popular machine learning frameworks like TensorFlow and Caffe. HVX is integrated into Snapdragon processors and available as a standalone DSP, making it a powerful platform for bringing artificial intelligence to a wider range of devices and applications.

Google Edge TPU

The Google Edge TPU is a custom-built chip designed to accelerate machine learning workloads at the edge of the network. It works with TensorFlow Lite and is specifically designed for performing inference on low-power devices like IoT sensors and cameras. The chip can perform up to 4 trillion operations per second while consuming only a few watts of power. It can run pre-trained models for image and object recognition, natural language processing, and more. Google provides a software development kit and APIs for easy integration into applications. The Edge TPU is an energy-efficient solution for real-time inference and analysis of data at the edge of the network.
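
As a rough sketch, this is how an Edge TPU-compiled TensorFlow Lite model is typically loaded and run from Python using the tflite_runtime interpreter with the Edge TPU delegate; the model file is a placeholder, and the delegate library name assumes a standard Linux install of the Coral runtime.

```python
# Minimal sketch: run inference with a TensorFlow Lite model compiled for the
# Edge TPU, using the Edge TPU delegate. The model path is a placeholder.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path="model_edgetpu.tflite",                       # placeholder model
    experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
)
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

# Feed a dummy input with the model's expected shape and dtype
dummy = np.zeros(input_details["shape"], dtype=input_details["dtype"])
interpreter.set_tensor(input_details["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details["index"]))
```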

TI Cavium CN99xx Thunder X2 CPU

The TI Cavium CN99xx Thunder X2 CPU is a multi-core processor designed for data centre and cloud computing applications. It features up to 54 custom-designed cores, up to 3.0 GHz clock speed, and up to 1 terabyte of memory, with integrated hardware acceleration for encryption, compression, and virtualisation. The Thunder X2 CPU is optimised for high-performance computing workloads, supports virtualisation, and is compatible with various operating systems and standard server hardware components. Overall, it is a powerful and energy-efficient processor designed for high-performance computing applications in data centres and the cloud.

LG Neural Engine

The LG Neural Engine is a hardware-based AI accelerator chip that enhances the performance of LG’s smart devices. It can perform complex machine learning tasks and uses a combination of hardware and software, including deep learning algorithms. The Neural Engine can process data locally, without relying on cloud connectivity, and is energy-efficient to help extend battery life. It is integrated into LG’s proprietary operating system and works seamlessly with the device’s CPU to optimise performance and power consumption. Overall, the LG Neural Engine improves the user experience and enables faster and more accurate AI-driven features.


Intel Puts a Happy Face on Its Worst Quarterly Loss Ever

April 28, 2023, by Agam Shah

Intel posted its worst quarterly loss in history on Thursday, but the chipmaker took a bold move to put a positive spin on the grim news.

“We delivered solid first-quarter results, representing steady progress with our transformation,” said Pat Gelsinger, Intel's CEO, in a press release.

But the results were anything but solid, with double-digit declines in overall revenue and an unprecedented triple-digit decline in earnings per share.

The company recorded a loss of $2.8 billion, a decline from a profit of $8.1 billion in the same quarter last year. The net loss was on quarterly revenue of $11.7 billion, a decline of 36% compared to the first quarter of 2022.

Intel declared the quarterly earnings a success as revenue was $700 million over the guidance provided by the chipmaker. Intel is currently restructuring the company's operations by cutting product lines, reorganizing divisions, and laying off employees.

Intel’s transformation revolves around becoming a manufacturing-first company by 2025. Gelsinger said the chip market is expected to be a $1 trillion opportunity by 2030.

"We continue to make progress on our commitment to reduce costs and drive efficiencies. We are well on our way towards our goal of reducing $3 billion in costs in 2023, and $8 to $10 billion in annual savings exiting 2025," Gelsinger said during an earnings call.

Revenue for the client computing group was $5.8 billion, down 38% year over year, and the Data Center and AI group revenue was $3.7 billion, which is a year-over-year decline of 39%.

The revenue declines were due to a slump in demand for PC and server chips, which were hurt by macroeconomic headwinds. The demand environment was challenging across the board, and the inventory levels piled up, which reduced demand for its chips.

It was a down quarter for server chips in enterprise and cloud, and that will continue to be the case for the first half of this year, Gelsinger said.

Intel expects to resolve the inventory issues in the second half of the year, but "we're being fairly cautious," Gelsinger said.

Gelsinger also acknowledged that Intel had a lot of work to do on datacenter products, which have been beset by delays and poor execution.

"We have to rebuild our customers’ confidence," Gelsinger said, adding that customer feedback indicates "a strong uptick in their belief that Intel's execution machine is back for their datacenter products of the future."

Next year's server chips, which include Sierra Forest and Granite Rapids, have shipped to customers earlier than expected, Gelsinger said.

The testing of those chips is at the volume validation phase, where customers are receiving enough samples that they can start to do broad validation of the platform.

"That validation cycle is very critical for us because it informs us of when we're ready to move forward with the production steppings of those parts and both the software and the firmware of the platform. We are seeing a very good response," Gelsinger said.

Intel expects to ship the 5th Gen Xeon chip codenamed Emerald Rapids later this year.

The chipmaker is also trying to make its AI chips broadly available. Its Gaudi2 chip is now shipping, and Gelsinger said the Gaudi3 AI chip taped out in the first quarter.

"We're describing to customers our 2025 platform, the Falcon Shores product, which … brings together the full offering of our HPC and AI into a single platform offering. Customers are responding very well," Gelsinger said.

Intel is starting to show up in this AI space, but "we have a lot of work to do to land meaningful revenue [and] customers in this area," he said.

Intel's main competition in the AI chip market includes Nvidia and AMD. The rivals are generating revenue through GPUs and FPGAs.

Intel is trying to advance four nodes in five years, and the chipmaker highlighted some recent announcements. The company's foundry services division has partnered with Arm to make chips on the 18A process, which is expected to start in 2025.


Want to learn more about prompt engineering? This free course can help


ChatGPT's launch sparked a generative AI craze that is quickly changing the AI sector and job market. Workers who can advance the development of future models and improve existing ones are in demand, including prompt engineers.

DeepLearning.AI, a platform dedicated to teaching AI, is partnering with OpenAI to offer a free course in prompt engineering for developers.


The one-hour course is free and will teach users how to use a large language model (LLM) to build new and powerful applications, according to the site.

The course is taught by Isa Fulford, a member of technical staff at OpenAI, and Andrew Ng, founder of DeepLearning.AI.

"Generative AI offers many opportunities for AI engineers to build, in minutes or hours, powerful applications that previously would have taken days or weeks," said Ng. "I'm excited about sharing these best practices to enable many more people to take advantage of these revolutionary new capabilities."


Although the course title makes it seem as if it is only geared towards developers, the course is deemed beginner-friendly. The only requirement is a basic understanding of Python.

The short course includes lessons on how LLMs work, what the best practices for prompt engineering are, how to use LLM APIs for different tasks, how to write effective prompts and even how to build a custom chatbot.


If you are interested, you should consider enrolling soon, because the site says it is free for a limited time until DeepLearning.AI leaves its beta stage.

To access the course all you have to do is visit DeepLearning.AI and click the "learn for free" button.


Is this the snarkiest AI chatbot so far? I tried HuggingChat and it was weird


Now that generative AI is widely available to users through artificial intelligence tools like ChatGPT, Bing, and, more recently, HuggingChat, it's easy to both under- and overestimate its power. Even with as many daily users as these bots have, most people are still unaware that this kind of tech is within reach.


At the same time, if you've used ChatGPT, which is arguably the best AI chatbot available right now, then you're doomed to find most other generative AI chatbots lacking. This has happened to me several times over, most recently, with HuggingChat.

Getting started with HuggingChat

As soon as I started using HuggingChat, I was pleasantly surprised to see that you don't have to create an account or log into the Hugging Face website to use it. Comparatively, ChatGPT can only be used after you create an OpenAI account, which includes providing a valid phone number to verify it.


All you need to do to start talking to HuggingChat is access its website and you'll land right on the chat window, where you can start typing away.

Hallucinations and misinformation

HuggingChat is in its infancy. The large language model it's trained on (LLaMA) is smaller than ChatGPT's, with about 65 billion parameters, while the older GPT-3 version from OpenAI has 175 billion parameters.

Because of this, you can't expect HuggingChat to perform at the level of ChatGPT, whether in accuracy and knowledge or in its ability to carry a conversation in a natural way.


Less training and fewer parameters mean an LLM hasn't been exposed to as much information as others, making it more prone to hallucinating facts, providing inaccurate responses, and conversing in an unnatural style.

With that in mind, I decided to give a little grace when I first started using it.

HuggingChat doesn't care about your feelings

It really doesn't. This is mostly fine, as I'm not expecting an AI chatbot to cater to my emotions, but HuggingChat has a long way to go to learn how humans express sentiment through natural language.


For example, other popular chatbots are programmed to respond to prompts in a respectful manner, and will stop short of making rude or sarcastic comments. HuggingChat is still learning this, as shown in the screenshot below.

One of the first questions I tested HuggingChat on was how it compares to ChatGPT. I learned it doesn't like to be compared to ChatGPT.

After I asked HuggingChat how it compares to ChatGPT, it stated that it's designed to "provide accurate factual information" and then ended with "I actually care about providing answers to queries instead of just filling space; see my responses to the first dozen messages in our conversation."

I couldn't leave it at that, so I called out HuggingChat for being snarky and got one of the few factual responses it's given me, simply saying "Aren't we all?"


For comparison, it's rare to have to call out ChatGPT for being snarky, as it seldom is, but this is what it replied when I did: "I apologize if any of my responses came across as snarky. As an AI language model, I do not have emotions or intent to be sarcastic or rude. […] If there was something specific that you found snarky, please let me know and I will do my best to improve my communication with you."

I decided to ask HuggingChat what LLM it's running on, to which it replied "I have no idea, someone else runs this for me. Ask them."

When I asked HuggingChat how it compares to ChatGPT, it described itself as created by OpenAI, which is incorrect. So, of course, I asked, "Are you HuggingChat or ChatGPT?" You can see the wild road HuggingChat went down to answer that question in the screenshot below.

This just left me confused.

At other points in our conversation, HuggingChat confusingly suggested I speak to humans instead of the AI, saying, "To avoid miscommunication, it's better if you just talk directly to the human users instead of trying to message the bot via Open Assistant."

Sometimes HuggingChat thinks it's ChatGPT

Aside from the confusion about who's human and who you're actually talking to, HuggingChat sometimes genuinely believes it's ChatGPT. The first question I asked the new AI chatbot was "Are you better than ChatGPT?", and you can see its answers in the screenshot below.

Everyone wants to work with OpenAI, I guess.

It started off saying it is a "GPT language model developed by OpenAI" and doubled down below, when I challenged that statement, saying "I am indeed a language model created by OpenAI, designed to process natural language input and produce human-like responses."


It's much slower, if it answers at all

HuggingChat is certainly slower than other competitors, likely because it's not running on the large infrastructure that companies like OpenAI and Microsoft have.

I tested how ChatGPT and HuggingChat would answer the same question and gave them identical prompts: "set up a backyard game for a kids birthday party, ages 6-8."

ChatGPT (left) and HuggingChat (right) answering the same prompt.

ChatGPT dove right into the task, came up with the game and outlined materials needed, instructions, and even safety recommendations (not pictured). HuggingChat, however, had a hallucinated conversation and then decided it didn't have time for games, literally, when it responded to itself "no thanks, i need to go buy supplies not use my time looking at ideas on line (sic)."

It's okay; it also gave me a link to a website (<https://www.wikihow.com/Plan-a-Birthday-Party-for-Kids>), which made me a bit hopeful, until I realized it doesn't exist.

Using HuggingChat: it wasn't all bad

We have to recognize how far generative AI has come. If you had told me twenty years ago that we'd have access to an artificially intelligent chatbot that can easily write code, generate text for resumes and letters, create Excel formulas, and more, I would have laughed and said we wouldn't have to wait twenty years for it.


With all the tech advancements we saw from the 1980s through the early 2000s, I'd have figured something like that would probably be widely available before 2010. Now, in 2023, we suddenly have so many options that it's become hard to choose one.

All in all, HuggingChat did a good job in about half of the conversations I had with it. It also handled translations well, and it looks promising for writing code, especially if you'd rather support an open-source generative AI tool.

HuggingChat is unique in that it is an open-source alternative to closed-source models that don't offer free access to their API. Right now, it's based on Open Assistant, an AI chatbot that you can help train on their website, but HuggingChat is likely to add other models to its platform in the future in an effort to keep improving.


How to use ChatGPT to summarize a book, article, or research paper


AI chatbots like ChatGPT can be used to make summarizing long articles, research papers, and books an easier task. If you're supposed to write a summary for school or work on a body of written text, remember that ChatGPT should be used to help you understand a topic rather than to write your work for you.


If you're a student writing a research paper, someone wanting to know more about a lengthy article, or someone who wants to know more about a complicated subject, you can use ChatGPT to simplify the process.

How ChatGPT can create summaries for you

Materials needed: You will need a device that can connect to the internet, an OpenAI account, and a URL to an article or research paper, or the title of a book. The process should take about one to three minutes.
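
For example, pasting a prompt along the lines of "Summarize the key points of this article in five bullet points: <link or pasted text>" into ChatGPT is typically enough to get started; the link or text here is a placeholder for your own source.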

This is an accurate summary of the URL I put into the prompt. But I still read the article in its entirety to ensure its accuracy.

FAQ

What are ChatGPT's limitations?

If you're using ChatGPT to summarize an article, book, or piece of research, keep in mind that ChatGPT isn't aware of events that occurred past 2021.


For example, suppose you ask ChatGPT to tell you about Joe Biden's campaign this year to ban TikTok. In that case, the chatbot will tell you, "It is currently unclear what actions, if any, the Biden administration may take regarding TikTok in the future."

If you try to get around this and provide ChatGPT with an article that contains information post-2021, it may hallucinate. Here, I asked the chatbot to summarize an article about a new app I wrote about, and it made up a few details.

Lemon8 is a new app from TikTok's parent company, ByteDance. Although the TikTok trend may exist, that's not what the article is about.

Can ChatGPT summarize a PDF?

If you can open a PDF in your web browser, you can try copying the link and pasting it into ChatGPT. But using URLs in ChatGPT brings the possibility of the chatbot hallucinating. It's best to read the PDF yourself and use the chatbot as a summary tool, not as an educator.

If you're looking for an AI chatbot that you can regularly rely on to give you an accurate summary of a PDF, consider using ChatPDF. You can summarize up to three PDFs of up to 120 pages per day and an upgraded plan is available for $5 per month.

Can ChatGPT summarize an email thread?

Sort of. If you want to copy and paste every single email, ChatGPT can summarize the thread's contents for you. It would be more helpful to scan an email thread yourself and ask ChatGPT to help you write a response based on the key points you know about the conversation.

Editor's note: We've added additional context to the step concerning ChatGPT summarizing articles by URL.


Generative AI might soon face some major copyright limitations from the EU


To operate, generative AI models have to be trained on massive amounts of data typically coming from the web. The training data oftentimes includes copyrighted materials which owners have not consented to be used for AI purposes.

The EU's AI Act seeks to change that, according to Reuters reports.


The European Parliament has decided to move forward with the AI Act which the European Commission began drafting two years ago. As part of the next step, EU lawmakers and member states will have to finalize details of the bill.

According to the report, the proposals include classifying AI models according to their risk level, from minimal risk all the way through to unacceptable.

Even if a model is found to be high-risk, it will not be banned; rather, its users will need to be transparent in their operations.


Most interestingly, the report shares that under a provision added two weeks ago, AI companies will have to disclose whether they are using any copyrighted materials to train or develop their systems.

Concerns over copyrighted material in AI have been especially high since OpenAI's models, ChatGPT and DALL-E, skyrocketed in popularity late last year.

The most evident copyright infringement occurred when artists saw aspects of their works in AI-generated images from DALL-E, which draws on the large database of images it was trained on to generate a brand-new image.


As AI developments continue to take place, new copyright issues arise. For example, AI-generated songs using artists' voices have been surfacing and causing concerns from both the artists and their labels.

The EU's proposed stricter copyright regulation could set a precedent for similar legislation worldwide and change the future of AI.
