DSC Weekly 2 April 2024

Announcements

  • TechTarget’s Enterprise Strategy Group conducted a survey of IT/DevOps pros and app developers responsible for their organizations’ application infrastructure and found that 63% have modernized their approach to IT service management (ITSM) strategy. The era of the traditional help desk model is a thing of the past, but what does the future of ITSM look like? Attend the upcoming Future of ITSM summit to discover the latest IT service management trends and technologies, including insight into AI-driven service management, cloud ITSM solutions, and IT-style automated workflows for non-IT departments.
  • In today’s constantly evolving digital landscape, networks are the backbone of modern enterprises. The need to prepare for potential network failures by instilling resilience and redundancy is more pressing than ever. Designing a stable, flexible and secure network infrastructure, with real-time visibility across assets and users is critical to maintaining reliability. Tune into the upcoming Strategies for a Resilient Network summit and discover strategies to design an agile, data-driven network that optimizes visibility, enhances DNS management and minimizes disruptions.

Top Stories

  • Ways LCD data science undermines more thoughtful approaches to AI
    April 2, 2024
    by Alan Morrison
    Lowest common denominator (LCD) data science is the unthinking variety of data science that doesn’t question the prevailing wisdom or try to counter it. The unfortunate reality is that LCD data science is much more common and triggers much more damaging side effects than the alternatives.
  • How to Transform Your ML Models with Generative AI
    April 1, 2024
    by Bill Schmarzo
    A “mix-in” is a component or feature added to an existing system or product to enhance its functionality, performance, or complexity without altering its core structure, akin to adding toppings to a dessert to enrich its flavor and appeal. Recently, a customer mentioned their plans to implement Generative AI (GenAI) for predictive maintenance.
  • How the New Breed of LLMs is Replacing OpenAI and the Likes
    March 31, 2024
    by Vincent Granville
    Of course, OpenAI, Mistral, Claude and the likes may adapt. But will they manage to stay competitive in this evolving market? Last week Databricks launched DBRX. It clearly shows the new trend: specialization, lightweight, combining multiple LLMs, enterprise-oriented, and better results at a fraction of the cost. Monolithic solutions where you pay by the token encourage the proliferation of models with billions or trillions of tokens, weights and parameters.

In-Depth

  • How data mapping enhances data governance and lineage
    April 2, 2024
    by Ovais Naseem
    In the big world of handling data, where information moves around in complicated ways, metadata is like a secret protector. It holds important clues about the data world. As data becomes more complex and there’s a lot more of it, organizations are starting to pay attention to metadata management, using tools like data mapping.
  • Blockchain’s role in enhancing data security in marketing
    April 2, 2024
    by Varun Bhagat
    With increasing data breaches and cyber threats, safeguarding marketing data becomes more critical for businesses. According to a recent report by IBM, the average data breach cost is estimated to be $3.86 million. This underscores the urgency for companies to fortify their data security measures.
  • R&D misdirection and the circuitous US path to artificial general intelligence
    April 1, 2024
    by Alan Morrison
    Big tech has substantial influence over the direction of R&D in the US. According to the National Science Foundation and the Congressional Research Service, US business R&D spending dwarfs domestic Federal or state government spending on research and development.
  • Gen AI’s memory wall
    March 31, 2024
    by Alan Morrison
    During an interview by Brian Calvert for a March 2024 piece in Vox, climate lead and AI researcher at Hugging Face Sasha Luccioni drew a stark comparison: “From my own research, what I’ve found is that switching from a non-generative, good old-fashioned quote-unquote AI approach to a generative one can use 30 to 40 times more energy for the exact same task.”
  • DSC Weekly 26 March 2024
    March 26, 2024
    by Scott Thompson
    Read more of the top articles from the Data Science Central community.

InstructIR: High-Quality Image Restoration Following Human Instructions


An image can convey a great deal, yet it may also be marred by various issues such as motion blur, haze, noise, and low dynamic range. These problems, commonly referred to as degradations in low-level computer vision, can arise from difficult environmental conditions like heat or rain or from limitations of the camera itself. Image restoration represents a core challenge in computer vision, striving to recover a high-quality, clean image from one exhibiting such degradations. Image restoration is complex because there might be multiple solutions for restoring any given image. Some approaches target specific degradations, such as reducing noise or removing blur or haze.

While these methods can yield good results for particular issues, they often struggle to generalize across different types of degradation. Many frameworks employ a generic neural network for a wide range of image restoration tasks, but these networks are each trained separately. The need for different models for each type of degradation makes this approach computationally expensive and time-consuming, leading to a focus on All-In-One restoration models in recent developments. These models utilize a single, deep blind restoration model that addresses multiple levels and types of degradation, often employing degradation-specific prompts or guidance vectors to enhance performance. Although All-In-One models typically show promising results, they still face challenges with inverse problems.

InstructIR represents a groundbreaking approach in the field, being the first image restoration framework designed to guide the restoration model through human-written instructions. It can process natural language prompts to recover high-quality images from degraded ones, considering various degradation types. InstructIR sets a new standard in performance for a broad spectrum of image restoration tasks, including deraining, denoising, dehazing, deblurring, and enhancing low-light images.

This article covers the InstructIR framework in depth: we explore its mechanism, methodology, and architecture, along with a comparison against state-of-the-art image restoration frameworks. So let's get started.

InstructIR: High-Quality Image Restoration

Image restoration is a fundamental problem in computer vision since it aims to recover a high-quality clean image from an image that exhibits degradations. In low-level computer vision, degradations are unwanted effects observed within an image, such as motion blur, haze, noise, and low dynamic range. Image restoration is a complex inverse problem because there may be multiple valid solutions for restoring any given image. Some frameworks focus on specific degradations, such as reducing noise (denoising), removing blur (deblurring), or clearing haze (dehazing).

Recent deep learning methods have shown stronger and more consistent performance than traditional image restoration techniques. These models use neural networks based on Transformers and convolutional neural networks, can be trained independently for diverse restoration tasks, and are able to capture and enhance both local and global feature interactions, yielding satisfactory and consistent results. However, while some of these methods work well for specific degradation types, they typically do not generalize to others. Furthermore, although many existing frameworks reuse the same neural architecture for a multitude of restoration tasks, each instance is trained separately. Training a separate model for every conceivable degradation is impractical and time-consuming, which is why recent image restoration work has concentrated on All-In-One restoration models.

All-In-One (also called multi-degradation or multi-task) image restoration models are gaining popularity in computer vision because they can restore multiple types and levels of degradation in an image without training a separate model for each one. These models use a single deep blind image restoration model to tackle different types and levels of image degradation. Different All-In-One models guide the blind model in different ways, for example with an auxiliary model that classifies the degradation, or with multi-dimensional guidance vectors or prompts that help the model restore different types of degradation within an image.

This brings us to text-based image manipulation, which several frameworks have implemented in recent years for text-to-image generation and text-based image editing. These models typically pair text prompts describing actions or images with diffusion-based models that generate the corresponding output. The main inspiration for InstructIR is the InstructPix2Pix framework, which lets users edit an image through instructions that tell the model what action to perform, rather than through text labels, descriptions, or captions of the input image. As a result, users can direct the model with naturally written text, without providing sample images or additional image descriptions.

Building on these ideas, the InstructIR framework is the first computer vision model that employs human-written instructions to perform image restoration and solve inverse problems. Given natural language prompts, the InstructIR model can recover high-quality images from their degraded counterparts while accounting for multiple degradation types. The framework delivers state-of-the-art performance on a wide array of image restoration tasks, including image deraining, denoising, dehazing, deblurring, and low-light image enhancement. In contrast to existing works that guide restoration with learned guidance vectors or prompt embeddings, InstructIR works from raw user prompts in text form. It generalizes to restoring images from human-written instructions, and its single all-in-one model covers more restoration tasks than earlier models. The following figure demonstrates diverse restoration samples from the InstructIR framework.

InstructIR : Method and Architecture

At its core, the InstructIR framework consists of a text encoder and an image model. For the image model it uses NAFNet, an efficient image restoration network that follows a U-Net architecture. The framework also implements task-routing techniques so that a single model can learn multiple tasks successfully. The following figure illustrates the training and evaluation approach for the InstructIR framework.

Drawing inspiration from the InstructPix2Pix model, the InstructIR framework adopts human-written instructions as its control mechanism, so the user does not need to provide any additional information. These instructions offer an expressive and clear way to interact, allowing users to point out the exact location and type of degradation in the image. Furthermore, using free-form user prompts instead of fixed degradation-specific prompts improves usability, since the model can also serve users who lack domain expertise. To equip the framework with an understanding of diverse prompts, the authors used GPT-4, a large language model, to create diverse requests, with ambiguous and unclear prompts removed through a filtering process.

Text Encoder

A text encoder maps user prompts to a text embedding, a fixed-size vector representation. Traditionally, the text encoder of a CLIP model has been the component of choice for encoding prompts in text-based image generation and manipulation models, since CLIP excels at aligning text with visual content. However, user prompts describing degradations often contain little or no visual content, so a large CLIP encoder adds significant computational cost for little benefit. To tackle this, the InstructIR framework opts for a sentence encoder trained to map sentences into a meaningful embedding space. Sentence encoders are pre-trained on millions of examples, yet they remain compact and efficient compared with CLIP-based text encoders while still capturing the semantics of diverse user prompts.
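To make the idea of a fixed-size sentence embedding concrete, here is a deliberately tiny sketch in Python. The hash-seeded word vectors and mean pooling are stand-ins of our own invention, not the BGE or CLIP encoder itself; a real sentence encoder learns its vectors from data.

```python
import hashlib
import numpy as np

def toy_sentence_encoder(sentence: str, dim: int = 8) -> np.ndarray:
    """Map a sentence to a fixed-size vector by mean-pooling
    deterministic per-word vectors (a toy stand-in for a trained
    sentence encoder such as BGE)."""
    vectors = []
    for word in sentence.lower().split():
        # Derive a repeatable pseudo-random vector from the word's hash.
        seed = int.from_bytes(hashlib.sha256(word.encode()).digest()[:4], "big")
        rng = np.random.default_rng(seed)
        vectors.append(rng.standard_normal(dim))
    emb = np.mean(vectors, axis=0)
    return emb / np.linalg.norm(emb)  # unit-normalize the embedding

# Every prompt, short or long, maps to the same fixed size.
e1 = toy_sentence_encoder("remove the noise from this photo")
e2 = toy_sentence_encoder("denoise this image please")
print(e1.shape)  # (8,)
```

The key property a real encoder adds on top of this sketch is semantic similarity: prompts like the two above would land close together in the embedding space.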

Text Guidance

A major aspect of the InstructIR framework is the use of the encoded instruction as a control mechanism for the image model. Inspired by task routing in multi-task learning, the framework proposes an Instruction Conditioned Block (ICB) to enable task-specific transformations within the model. Conventional task routing applies task-specific binary masks to channel features; since InstructIR does not know the degradation in advance, this technique cannot be applied directly. Instead, given the image features and the encoded instruction, the framework produces the mask with a linear layer followed by a sigmoid activation, yielding a c-dimensional set of per-channel weights conditioned on the text embedding, a soft mask in place of a hard binary one. The model further enhances the conditioned features using a NAFBlock, and applies the NAFBlock and the Instruction Conditioned Block to condition features in both the encoder and decoder blocks.
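The routing mechanism described above can be sketched in a few lines of NumPy. The dimensions and random weights below are illustrative placeholders, not InstructIR's actual parameters; the point is only the linear-layer-plus-sigmoid shape of the computation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
c, d = 16, 8                                  # feature channels, text-embedding size
text_emb = rng.standard_normal(d)             # encoded instruction
features = rng.standard_normal((c, 32, 32))   # image features, shape (C, H, W)

# A linear layer plus sigmoid turns the instruction into per-channel
# weights in (0, 1): a soft version of the binary masks used in
# conventional task routing.
W = rng.standard_normal((c, d))
mask = sigmoid(W @ text_emb)                  # shape (c,)

# Re-weight each channel according to the instruction.
routed = features * mask[:, None, None]
print(mask.shape, routed.shape)
```

In the real model the weights `W` are learned, so channels useful for, say, denoising are amplified when the instruction asks for denoising.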

Although the InstructIR framework does not condition the neural network filters explicitly, the mask allows the model to select the channels most relevant to the instruction and the image information.

InstructIR: Implementation and Results

The InstructIR model is end-to-end trainable, and the image model does not require pre-training; only the text embedding projections and the classification head need to be trained. The text encoder is initialized from a BGE encoder, a BERT-like encoder pre-trained on a massive amount of supervised and unsupervised data for general-purpose sentence encoding. The InstructIR framework uses the NAFNet model as its image model; the NAFNet architecture consists of a four-level encoder-decoder with a varying number of blocks at each level, plus four middle blocks between the encoder and the decoder that further enhance the features. Instead of concatenating features for the skip connections, the decoder uses addition, and the ICB (Instruction Conditioned Block) performs task routing only in the encoder and decoder.

The InstructIR model is optimized with a reconstruction loss between the restored image and the ground-truth clean image, plus a cross-entropy loss for the intent classification head of the text encoder. Training uses the AdamW optimizer with a batch size of 32 and a learning rate of 5e-4 for roughly 500 epochs, with cosine annealing learning rate decay. Since the image model comprises only 16 million parameters, and there are only 100 thousand learned text projection parameters, the InstructIR framework can easily be trained on standard GPUs, reducing computational costs and increasing applicability.

Multiple Degradation Results

For multiple degradations and multi-task restorations, the InstructIR framework defines two initial setups:

  1. 3D, a three-degradation model tackling dehazing, denoising, and deraining.
  2. 5D, a five-degradation model tackling denoising, deraining, dehazing, deblurring, and low-light image enhancement.

The following table reports the performance of the 5D model and compares it with state-of-the-art image restoration and all-in-one models.

As can be observed, the InstructIR framework, with a simple image model and just 16 million parameters, handles five different image restoration tasks successfully thanks to instruction-based guidance, and delivers competitive results. The following table shows the performance of the 3D model, and the results are comparable to those above.

The main highlight of the InstructIR framework is instruction-based image restoration, and the following figure demonstrates the model's ability to understand a wide range of instructions for a given task. Notably, when given an adversarial instruction, the model performs an identity mapping, returning the input unchanged, even though this behavior is never explicitly enforced.

Final Thoughts

Image restoration is a fundamental problem in computer vision: it aims to recover a high-quality clean image from a degraded one. In low-level computer vision, degradations are the unwanted effects observed within an image, such as motion blur, haze, noise, and low dynamic range. In this article, we covered InstructIR, the first image restoration framework that guides the restoration model using human-written instructions. Given natural language prompts, the InstructIR model can recover high-quality images from their degraded counterparts while accounting for multiple degradation types, and it delivers state-of-the-art performance on a wide array of image restoration tasks, including image deraining, denoising, dehazing, deblurring, and low-light image enhancement.

Blockchain’s role in enhancing data security in marketing


With increasing data breaches and cyber threats, safeguarding marketing data becomes more critical for businesses.

According to a recent report by IBM, the average data breach cost is estimated to be $3.86 million.

This underscores the urgency for companies to fortify their data security measures. Traditional approaches often fail to address the evolving challenges of data protection in marketing, leaving sensitive information vulnerable to exploitation.

However, utilizing blockchain for data security facilitates a promising solution to these pressing concerns.

This technology enables secure and transparent data transactions by providing a decentralized and immutable ledger, mitigating the risks associated with centralized data storage systems.

Blockchain’s data security mechanisms

Blockchain and data security are crucial to modern digital ecosystems, enabling safe and transparent transactions, and blockchain in particular bolsters data security within marketing operations. Through its combination of decentralization and encryption, blockchain protects sensitive data from unauthorized access or tampering. Picture a digital ledger where information is stored across a network of computers rather than on a centralized server. This decentralized design makes it virtually impossible for any single entity to control or manipulate the data. Each transaction or data entry is recorded in a block and linked to the previous one, creating an immutable and transparent chain of blocks. But how does blockchain ensure data security?

Let’s delve deeper:

  • Decentralization

Traditional databases need a central authority to manage and check transactions. This can make them easy targets for cyberattacks or data leaks. In contrast, using blockchain for data analytics distributes data across multiple nodes, eliminating the risk of a single point of failure. This decentralized setup makes it highly challenging for hackers to compromise the integrity of the data.

  • Encryption

Blockchain uses cryptographic techniques to keep data secure during transactions. Every block carries a cryptographic hash, and transactions are digitally signed, ensuring only authorized parties can change or view the data. This cryptographic protection safeguards sensitive marketing data from tampering or interception.
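The hash-linked, tamper-evident ledger described in these two bullets can be sketched with only Python's standard library. This is a toy model for intuition, not a real blockchain: it omits consensus, networking, and digital signatures.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that commits to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def chain_is_valid(chain: list) -> bool:
    """Recompute every hash and check each link to the previous block."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
add_block(ledger, "customer opt-in recorded")
add_block(ledger, "campaign impression logged")
print(chain_is_valid(ledger))   # True

ledger[0]["data"] = "tampered"  # any edit breaks the hash links
print(chain_is_valid(ledger))   # False
```

Because each block commits to its predecessor's hash, changing any historical entry invalidates every later link, which is what makes the ledger tamper-evident.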

Challenges in Marketing Data Security

Data security is crucial for marketing teams to protect sensitive information and maintain customer trust.


In this section, we explore the challenges marketers face and the limitations of traditional security methods.

  • Data Breaches

Ever wondered how easily customer information can fall into the wrong hands? Data breaches pose a significant threat, jeopardizing trust and brand reputation.

  • Cyberattacks

Imagine your competitor accessing your marketing strategies overnight. Cyberattacks, including phishing and malware, are ever-looming threats in the digital realm.

  • Compliance Issues

Are you confident your marketing practices comply with regulations like GDPR or CCPA? Non-compliance can lead to heavy fines and tarnish your brand’s image.

  • Third-party Risks

Do you trust every vendor handling your marketing data? Third-party services introduce additional vulnerabilities, raising concerns about data misuse or unauthorized access.

  • Data Fragmentation

How streamlined is your data management process across various marketing channels? Fragmented data systems make it challenging to maintain consistency and control.

Traditional data security methods have limitations in addressing these challenges.

While firewalls and encryption are essential, they may not provide the robust protection needed in today’s sophisticated threat landscape. Consider a scenario where a marketing database containing customer information is compromised due to a phishing attack. Despite having encryption measures in place, the attackers exploit vulnerabilities, leading to significant data exposure.

Similarly, relying solely on password protection for access control may prove inadequate when facing targeted cyber threats. Hackers adept at social engineering can manipulate employees into divulging sensitive information, bypassing traditional security measures.

Marketing teams need a more robust solution to evolving threats. Blockchain for data security emerges as a promising contender, offering immutable data storage and decentralized consensus mechanisms. In the following sections, let’s explore how blockchain can revolutionize data security in marketing.

Blockchain Solutions for Marketing

Ensuring the security of marketing data is crucial to maintaining customer trust and safeguarding sensitive information.

However, traditional data security measures often fail to address the evolving threats marketers face.


Here, we explore how secure blockchain technology offers innovative solutions to enhance data security in marketing.

1. Immutable Data Storage

Traditional databases are vulnerable to tampering and unauthorized access, posing significant risks to marketing data integrity. Blockchain’s decentralized and immutable ledger ensures that once data is recorded, it cannot be altered or deleted without consensus from the network participants. This feature makes blockchain ideal for securely storing customer profiles, transaction records, and campaign analytics.

For instance, Coca-Cola utilizes blockchain to securely record and track its supply chain data, ensuring the authenticity of its products and combating counterfeit goods.

2. Smart Contracts for Secure Transactions

Smart contracts are self-executing agreements. They are programmed to execute and enforce terms when predefined conditions are met. In marketing, smart contracts can streamline payment processes, automate contract management, and ensure compliance with regulatory requirements. By leveraging blockchain’s transparent and auditable nature, marketers can mitigate the risk of fraud and transaction disputes.
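The self-executing behavior described above can be illustrated with a toy example. The contract class, names, and payout rule below are hypothetical, invented purely for illustration; real smart contracts run on-chain in languages such as Solidity.

```python
from dataclasses import dataclass, field

@dataclass
class AdPaymentContract:
    """Toy smart contract: releases payment to the publisher once a
    pre-agreed impression count is reached. All names and numbers here
    are illustrative, not any real platform's API."""
    publisher: str
    rate_per_1000: float
    threshold: int
    impressions: int = 0
    paid: bool = False
    ledger: list = field(default_factory=list)

    def record_impressions(self, count: int) -> None:
        self.impressions += count
        self._maybe_execute()

    def _maybe_execute(self) -> None:
        # Self-executing clause: fires exactly once when the condition holds.
        if not self.paid and self.impressions >= self.threshold:
            amount = self.impressions / 1000 * self.rate_per_1000
            self.ledger.append((self.publisher, round(amount, 2)))
            self.paid = True

contract = AdPaymentContract(publisher="acme-news", rate_per_1000=2.5,
                             threshold=10_000)
contract.record_impressions(6_000)
print(contract.ledger)  # [] - threshold not reached yet
contract.record_impressions(4_000)
print(contract.ledger)  # [('acme-news', 25.0)]
```

The key property is that payment follows automatically and auditably from the recorded data, with no party able to delay or dispute it after the fact.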

One example is AdEx, a decentralized advertising platform that uses smart contracts to facilitate transparent and fraud-resistant ad transactions. Struggling to implement blockchain technology in your business? You can choose to hire blockchain developers to alleviate the complexity and get seamless integration.

3. Enhanced Identity Management

Identity theft and fraud are common dangers online, making people doubt the trustworthiness of digital marketing. Blockchain offers a way to manage identities safely without a central authority: it uses cryptographic verification to confirm who users are without relying on a single gatekeeper, making it harder for hackers to steal information.
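One way to check a claim without a central identity provider is a salted hash commitment, sketched below. This is a simplified illustration of the general idea, not Civic's actual protocol.

```python
import hashlib
import secrets

def commit(attribute: str) -> tuple[str, str]:
    """Publish only a salted hash of an identity attribute.
    The salt is revealed to the verifier at verification time."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + attribute).encode()).hexdigest()
    return salt, digest

def verify(attribute: str, salt: str, digest: str) -> bool:
    """Any verifier can check the claim against the public commitment,
    with no central database holding the raw attribute."""
    return hashlib.sha256((salt + attribute).encode()).hexdigest() == digest

salt, digest = commit("over-18")
print(verify("over-18", salt, digest))   # True
print(verify("over-21", salt, digest))   # False
```

The salt prevents an attacker from guessing common attributes by brute-forcing hashes, while the commitment itself reveals nothing about the underlying data.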

An example of this is Civic, a tool using blockchain to check identities securely, keeping data safe from hackers and fraudsters.

4. Transparent Supply Chain Tracking

In industries like retail and manufacturing, supply chain transparency is essential for ensuring product authenticity and combating counterfeit goods. Blockchain enables end-to-end traceability by recording every transaction and movement of goods on an immutable ledger. Marketers can use blockchain for data security to verify the authenticity of products, track their journey from manufacturer to consumer, and enhance brand trust through transparent supply chain management.

IBM Food Trust is a prime example. This modular food traceability solution utilizes blockchain to boost traceability and transparency in the food supply chain. It allows consumers to verify the origin and quality of products.

5. Decentralized Ad Networks

Centralized ad networks often face challenges related to ad fraud, click manipulation, and opaque reporting practices. Blockchain-powered ad networks offer transparency and accountability by recording ad impressions, clicks, and conversions on a tamper-proof ledger. Marketers can gain real-time insights into ad performance, verify the authenticity of traffic, and ensure fair compensation for publishers and advertisers.

Brave Browser’s Basic Attention Token (BAT) ecosystem leverages blockchain to reward users for their attention and incentivize ethical advertising practices, fostering a more transparent and fair digital advertising ecosystem.

Step-by-Step Implementation Process

As we all know, data security is vital for marketers. With increasing data breaches and cyberattacks, safeguarding sensitive information has become a top priority. Blockchain offers a robust solution for enhancing data security in marketing.

But how can businesses effectively implement blockchain into their existing marketing systems?


Let’s quickly look at implementation strategies and tips to ensure a smooth transition and maximize the advantages of blockchain technology.

Assess Your Current Infrastructure

  • Start by thoroughly assessing your existing marketing systems and processes.
  • Identify areas where data security vulnerabilities exist, and blockchain can provide added protection.

Identify Suitable Use Cases

  • Determine specific use cases within your marketing operations where blockchain can benefit most.
  • Focus on customer data management, digital advertising, and supply chain transparency.

Collaborate with IT and Security Teams

  • Engage your IT and security teams early in planning to ensure alignment and buy-in.
  • Collaborate closely to develop a clear roadmap for integrating secure blockchain technology while addressing potential challenges.

Choose the Right Blockchain Platform

  • Research and select a blockchain platform that aligns with your business requirements and security needs.
  • Consider scalability, interoperability, and compliance with industry standards.

Ensure Data Compatibility and Interoperability

  • Ensure your existing data infrastructure is compatible with blockchain technology.
  • Implement protocols and standards to facilitate seamless data exchange and system interoperability.

Pilot Test and Iterate

  • Start with small-scale projects to test the effectiveness of blockchain integration.
  • Gather feedback from different stakeholders and end-users to identify areas for improvement and iterate accordingly.

Train Your Team

  • Provide comprehensive training to your marketing team on blockchain best practices and fundamentals.
  • Empower them to understand the importance of data security and their role in maintaining it through blockchain technology.

Monitor and Evaluate Performance

  • Establish key performance indicators (KPIs) to measure the impact of blockchain integration on data security.
  • Continuously monitor and evaluate performance metrics to ensure that objectives are met.

Conclusion

Blockchain technology holds incredible promise for fortifying data security in marketing endeavors. The decentralized nature and powerful encryption techniques shield it against cyber threats. It makes sure that sensitive information remains safe and sound. As businesses strive to protect their marketing data, integrating blockchain for data security becomes more than just a smart move—it’s a strategic imperative.

Companies can team up with seasoned professionals in blockchain development services. This helps them tap into the full potential of this game-changing technology. Looking ahead, blockchain’s widespread adoption is set to revolutionize data security standards in marketing, paving the way for innovation and instilling confidence among all stakeholders.

Read AI expands its AI-powered summaries from meetings to messages and emails

by Kyle Wiggers

Meetings are time-consuming, and there’s no way around it. According to a 2022 poll from Deputy.com, many U.S. workers spend up to eight hours in meetings every week, depending on the industry and locale.

The productivity hit explains the growing popularity of AI-powered summarization tools. In a recent survey of marketers by The Conference Board, a nonprofit think tank, nearly half of respondents said they were using AI to summarize the content of emails, conference calls and more.

While a number of videoconferencing suites now offer built-in summarization features, David Shim believes that there’s room for third-party solutions. And he would: He’s the co-founder of Read AI, which summarizes video calls across platforms such as Zoom, Microsoft Teams and Google Meet.

Shim, previously the CEO of Foursquare, co-founded Read AI with Rob Williams and Elliott Waldron in 2021. Prior to Read AI, the trio worked together at Foursquare, Snapchat and Shim’s previous startup, Placed (which Foursquare acquired in 2019).

“Read AI’s direct competition is traditional project management, where notes are manually written,” Shim told TechCrunch. “By learning what’s important to you cross-platform, Read isn’t a co-pilot — rather, it’s an autopilot delivering content that makes your work more effective and efficient.”

At the start, Read focused exclusively on video meeting tools, offering dashboards to measure how well a meeting’s going (as judged by certain metrics, at least) and two-minute summaries of hourlong meetings. But, coinciding with a recently closed $21 million funding round led by Goodwater Capital with Madrona Venture Group, the company is expanding into message and email summarization.

Available in “soft launch,” Read’s new capability connects to Gmail, Outlook and Slack as well as videoconferencing platforms to learn topics that might be relevant to you. Within 24 hours of connecting to the messaging and videoconferencing services you use, Read begins delivering daily updates with summaries, AI-generated “takeaways,” an overview of key content and updates to conversation topics in chronological order. Read charges a $15 to $30 monthly fee for its service.

“What makes Read unique is that its AI agents work quietly in the background, enabling your meetings, emails and messages to interact with each other,” Shim said, adding that the average summary from Read AI condenses 50 emails across 10 recipients into a single summary. “This connected intelligence unifies your communications and empowers you and your team with personalized, actionable briefings tailored to your needs and priorities.”

Now, color me skeptical, but I’m not sure I trust any AI-driven tool to summarize content consistently and accurately.


Read’s platform taps generative AI to summarize meetings, messages and emails. Image Credits: Read

Models like ChatGPT and Microsoft’s Copilot make mistakes when summarizing because of their tendency to hallucinate, including in summaries of meetings. In a recent piece, The Wall Street Journal cited an instance where, for one early adopter using Copilot for meetings, Copilot invented attendees and implied that calls were about subjects that were never actually discussed.

Is Read AI’s tool any different? Shim claims that it’s more robust than many of the solutions out there, including rivals like Supernormal and Otter.

“Read runs a proprietary methodology to coordinate raw content with language model outputs, so that deviations are automatically detected and appropriately steered,” he said. “Additionally, we can use content from meetings to better contextualize email and messaging content, further reducing uncertainty and improving results.”

Take that statement with a grain of salt. Shim didn’t share benchmark results to support those assertions.

In lieu of benchmarks, Shim emphasized the productivity boost summarization tools such as Read can (in theory) deliver.

“Rather than rescheduling a meeting as you’re running late or double-booked, Read can attend in your place and deliver to you a summary and action items that even the best executive assistant couldn’t match,” he said, stressing also that Read doesn’t use customer data to train its AI models and that users have “full control” over content passing through the platform. “AI is bringing focus back to knowledge workers [by] saving them hours a day.”

Read AI is no stranger to controversy, so it’s a little hard to take Shim at his word. The platform’s sentiment analysis tool, which interprets meeting participants’ vocal and facial cues to inform hosts on their sentiment, has been called out by privacy advocates for being overly invasive, prone to bias and very possibly a data security risk.

Gender and racial biases are a well-documented phenomenon in sentiment analysis algorithms.

Emotional analysis models tend to assign more negative emotions to Black people’s faces than white people’s, and perceive the language that some Black people use as aggressive or toxic. AI video hiring platforms have been found to respond differently to the same job candidate wearing different outfits, such as glasses and headscarves. And in a 2020 study from MIT, researchers showed that algorithms could become biased toward certain facial expressions, like smiling, which could reduce their accuracy.


Perhaps tellingly, Shim continues to see Read’s sentiment analysis technology as a competitive advantage, not a risk, while pointing out that customers can disable the feature and that analysis data is deleted from Read’s servers periodically. “Using a multimodal model allows Read to incorporate non-verbal responses into meeting summaries,” he said. “As an example, during a pitch meeting, a startup might talk about the benefits of the product, but the participants visually shake their heads and frown during the pitch … Read creates a custom baseline of engagement and sentiment for each meeting participant, rather than applying a one-size fits all model, ensuring that each person is treated as a unique person.”

Accurate or not, with a $32 million war chest and a customer base that grew by half a million users over the past quarter, Read clearly has some folks convinced that it can deliver on its promises.

Read, based in Seattle, Washington, plans to use the new infusion of capital to double its staff to over 40 employees by the end of the year, Shim said.

“In the face of a broader slowdown over the last few years, Read has continued to see the growth curve steepen across users, meetings and revenue,” he added. “This acceleration in growth can directly be attributed to the quantifiable return users see in terms of time savings when using Read AI in their meetings.”

Hailo Revolutionizes Edge AI with Launch of Powerful Hailo-10 Accelerator and Secures $120 Million Funding

In a significant advancement for edge computing, Hailo, a trailblazer in edge AI processors, has successfully completed a new funding milestone of $120 million, propelling its total investment to over $340 million. This financial boost coincides with the debut of its groundbreaking Hailo-10 GenAI accelerator, marking a pivotal shift towards embedding generative AI directly into edge devices across the personal computer and automotive industries.

The new funding round was led by current and new investors including the Zisapel family, Gil Agmon, Delek Motors, Alfred Akirov, DCLBA, Vasuki, OurCrowd, Talcar, Comasco, Automotive Equipment (AEV), and Poalim Equity.

Hailo CEO and Co-Founder Orr Danon stated, “The closing of our new funding round enables us to leverage all the exciting opportunities in our pipeline, while setting the stage for our long-term future growth. Together with the introduction of our Hailo-10 GenAI accelerator, it strategically positions us to bring classic and generative AI to edge devices in ways that will significantly expand the reach and impact of this remarkable new technology. We designed Hailo-10 to seamlessly integrate GenAI capabilities into users’ daily lives, freeing users from cloud network constraints. This empowers them to utilize chatbots, copilots, and other emerging content generation tools with unparalleled flexibility and immediacy, enhancing productivity and enriching lives.”

Founded in 2017 in Israel, Hailo has rapidly ascended to a prominent position in the AI chip industry, serving over 300 customers globally. With offices spanning from the United States to Asia, Hailo's innovative processors deliver data center-level AI performance on edge devices. These processors, redefining conventional computer architecture, facilitate real-time deep learning tasks with unmatched efficiency.

Hailo-10 Accelerator

The introduction of the Hailo-10 accelerator is set to redefine the user experience by enabling the execution of GenAI applications locally, without the dependency on cloud-based services. Orr Danon, Hailo's CEO and Co-Founder, emphasizes the transformative potential of Hailo-10, highlighting its ability to integrate GenAI capabilities seamlessly into daily life. The Hailo-10 not only promises enhanced productivity and enriched user experiences but also addresses privacy concerns by processing data locally and reducing reliance on cloud data centers.

Hailo-10 stands out for its performance-to-cost and performance-to-power consumption ratios, supporting a wide range of applications while ensuring sustainability. Its compatibility with Hailo's comprehensive software suite, shared with the Hailo-8 and Hailo-15 processors, facilitates seamless integration across various devices and platforms.

The Hailo-10 sets a new benchmark for edge AI accelerators with its capability to perform up to 40 tera operations per second (TOPS), outpacing integrated neural processing units (NPUs) in both speed and energy efficiency. Recent benchmarks reveal that it offers double the performance while consuming only half the power compared to Intel's Core Ultra NPU.

Orr Danon further added, “Whether users employ GenAI to automate real-time translation or summarization services, generate software code, or images and videos from text prompts, Hailo-10 lets them do it directly on their PCs or other edge systems, without straining the CPU or draining the battery.”

Notably, Hailo-10 showcases remarkable efficiency, capable of running complex models like Llama2-7B and Stable Diffusion 2.1 within an ultra-low power envelope, setting new benchmarks in the field. The accelerator's early applications are poised to revolutionize PCs and automotive infotainment systems by powering sophisticated AI features such as chatbots and personal assistants, heralding a new era of intelligent edge computing.

With sample shipments of the Hailo-10 GenAI accelerator anticipated in Q2 of 2024, Hailo is at the forefront of ushering generative AI into the mainstream, transforming how users interact with technology on a fundamental level.

Ways LCD data science undermines more thoughtful approaches to AI


Image by Steve Buissinne on Pixabay

Lowest common denominator (LCD) data science is the unthinking variety of data science that doesn’t question the prevailing wisdom or try to counter it. The unfortunate reality is that LCD data science is much more common and triggers much more damaging side effects than the alternatives.

Consider some symptoms of a society suffering from the current dominance of LCD data science:

The chatbot wow factor and a willingness to be deluded by gen AI’s allure

At this year’s South by Southwest conference, Microsoft’s VP of AI and Design John Maeda observed that chatbots have been fooling humans since the 1960s. Conversation is often cryptic, leading humans to fill in the gaps with assumptions that aren’t reflective of what the AI is actually doing or why. As a result, bots can seem smarter than they really are.

Maeda said chatbots for decades have been adept at extending conversations merely by picking up keywords from a human’s conversation and throwing the keywords back at them in the form of questions phrased in ways that imply the bot is genuinely curious.

It’s not difficult for bots to borrow the therapist’s approach to getting a patient to talk about their problems. For example, the bot hears the human mention “mother”. The question in response becomes, “Tell me about your mother.”

Lately, even some trained scientists who’ve been wowed by generative AI’s recent question-answering success have been asserting that bots seem “sentient” these days. Skeptics, meanwhile, counter that bots are really just doing an elaborate form of autocomplete-style guesswork, and that they’re still hallucinating quite a bit.

Just because chatbots provide useful answers to questions doesn’t prove they understand what the answers they’re delivering mean, or how they relate to the nuances behind the question.

How AI-enabled automation can lower overall business performance

In January 2024, the International Monetary Fund (IMF) released a Staff Discussion Note entitled “Gen-AI: Artificial Intelligence and the Future of Work.” One of the observations the authors offered was this one:

In advanced economies, about 60 percent of jobs are exposed to AI…. Of these, about half may be negatively affected by AI, while the rest could benefit from enhanced productivity through AI integration.

One way to read this sort of assertion with a critical eye is to think about current automation-driven practices and how the quality of those processes has further declined now that AI-enabled software is the norm.

Take the typical HR department’s worst hiring tendencies and how they’re magnified by AI. In a time when popular business books like David Epstein’s Range: Why Generalists Triumph in a Specialized World have proclaimed the value of generalists, the vast majority of job postings online are designed to filter on a laundry list of a dozen or more specialties. The generalists may well be valuable, but what’s the likelihood their application will make it to the hiring manager for consideration?

Much more likely is the prospect that applications from abstract-thinking generalists will be filtered out of consideration with the help of AI, precisely because these generalists may not have X years of experience in the Y specialization using the Z software package. More thoughtful AI, by contrast, would steer clear of reducing hiring to a mere resume-to-requirements text-matching exercise.

Repeating the lie that a hard problem is solved doesn’t make it so

Timnit Gebru, Founder & Executive Director at The Distributed AI Research Institute, recently shared a video clip from a 1984 episode of a Silicon Valley PBS affiliate’s TV program The Computer Chronicles as an example of the kind of AI hype that’s been around for forty years or more. During the program, one of the consultants interviewed proudly announced, “We’ve reached a watershed, where it’s no longer very expensive or very difficult for individuals with no technical background to build [AI] systems and apply them usefully.”

The truth is that systems thinking is hard, and that most companies fail to support a forward-looking architectural vision. Systems thinking should be a salaried discipline in its own right, one that needs generalists who can abstract, synthesize and clear a path via data-centric architecture for the process improvements that analytics results demand. Business leaders need to fund and nurture 20 different roles, several of which involve architects at different levels, to tackle AI, not just four, and those roles need to represent a full range of intellectual diversity, thinkers with many different styles.

Improving AI implies the need for a radically different approach to data + knowledge management

Many of the skills these roles demand already exist inside the largest enterprises. The problem is that the people with these skills are siloed in dedicated data management, content management and knowledge management departments.

The people from these three departments could instead band together with a single, unified approach to structured and unstructured data management that’s feasible now with a knowledge graph-based data architecture. Leadership needs to de-silo their organizations and empower visionary architects to implement such a unified approach. To fund such an effort, leaders can reallocate budgets from underutilized, siloed application suites.

Women in AI: Kristine Gloria of the Aspen Institute tells women to enter the field and ‘follow your curiosity’

by Kyle Wiggers

To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.

Kristine Gloria leads the Aspen Institute’s Emergent and Intelligent Technologies Initiative — the Aspen Institute being the Washington, D.C.-headquartered think tank focused on values-based leadership and policy expertise. Gloria holds a Ph.D. in cognitive science and a Master’s in media studies, and her past work includes research at MIT’s Internet Policy Research Initiative, the San Francisco-based Startup Policy Lab, and the Center for Society, Technology and Policy at UC Berkeley.

Q&A

Briefly, how did you get your start in AI? What attracted you to the field?

To be frank, I definitely didn’t start my career in pursuit of being in AI. First, I was really interested in understanding the intersection of technology and public policy. At the time, I was working on my Master’s in media studies, exploring ideas around remix culture and intellectual property. I was living and working in D.C. as an Archer Fellow for the New America Foundation. One day, I distinctly remember sitting in a room filled with public policymakers and politicians who were throwing around terms that didn’t quite fit their actual technical definitions. It was shortly after this meeting that I realized that in order to move the needle on public policy, I needed the credentials. I went back to school, earning my doctorate in cognitive science with a concentration on semantic technologies and online consumer privacy. I was very fortunate to have found a mentor and advisor and lab that encouraged a cross-disciplinary understanding of how technology is designed and built. So, I sharpened my technical skills alongside developing a more critical viewpoint on the many ways tech intersects our lives. In my role as the director of AI at the Aspen Institute, I then had the privilege to ideate, engage and collaborate with some of the leading thinkers in AI. And I always found myself gravitating towards those who took the time to deeply question if and how AI would impact our day-to-day lives.

Over the years, I’ve led various AI initiatives and one of the most meaningful is just getting started. Now, as a founding team member and director of strategic partnerships and innovation at a new nonprofit, Young Futures, I’m excited to weave in this type of thinking to achieve our mission of making the digital world an easier place to grow up. Specifically, as generative AI becomes table stakes and as new technologies come online, it’s both urgent and critical that we help preteens, teens and their support units navigate this vast digital wilderness together.

What work are you most proud of (in the AI field)?

I’m most proud of two initiatives. First is my work related to surfacing the tensions, pitfalls and effects of AI on marginalized communities. Published in 2021, “Power and Progress in Algorithmic Bias” articulates months of stakeholder engagement and research around this issue. In the report, we posit one of my all-time favorite questions: “How can we (data and algorithmic operators) recast our own models to forecast for a different future, one that centers around the needs of the most vulnerable?” Safiya Noble is the original author of that question, and it’s a constant consideration throughout my work. The second most important initiative recently came from my time as Head of Data at Blue Fever, a company on the mission to improve youth well-being in a judgment-free and inclusive online space. Specifically, I led the design and development of Blue, the first AI emotional support companion. I learned a lot in this process. Most saliently, I gained a profound new appreciation for the impact a virtual companion can have on someone who’s struggling or who may not have the support systems in place. Blue was designed and built to bring its “big-sibling energy” to help guide users to reflect on their mental and emotional needs.

How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?

Unfortunately, the challenges are real and still very current. I’ve experienced my fair share of disbelief in my skills and experience among all types of colleagues in the space. But, for every single one of those negative challenges, I can point to an example of a male colleague being my fiercest cheerleader. It’s a tough environment, and I hold on to these examples to help manage. I also think that so much has changed in this space even in the last five years. The necessary skill sets and professional experiences that qualify as part of “AI” are not strictly computer science-focused anymore.

What advice would you give to women seeking to enter the AI field?

Enter in and follow your curiosity. This space is in constant motion, and the most interesting (and likely most productive) pursuit is to continuously be critically optimistic about the field itself.

What are some of the most pressing issues facing AI as it evolves?

I actually think some of the most pressing issues facing AI are the same issues we’ve not quite gotten right since the web was first introduced. These are issues around agency, autonomy, privacy, fairness, equity and so on. These are core to how we situate ourselves amongst the machines. Yes, AI can make it vastly more complicated — but so can socio-political shifts.

What are some issues AI users should be aware of?

AI users should be aware of how these systems complicate or enhance their own agency and autonomy. In addition, as the discourse grows around how technology, and particularly AI, may impact our wellbeing, it’s important to remember there are tried-and-true tools to manage more negative outcomes.

What is the best way to responsibly build AI?

A responsible build of AI is more than just the code. A truly responsible build takes into account the design, governance, policies and business model. All drive the other, and we will continue to fall short if we only strive to address one part of the build.

How can investors better push for responsible AI?

One specific task, which I admire Mozilla Ventures for requiring in its diligence, is an AI model card. Developed by Timnit Gebru and others, this practice of creating model cards enables teams — like funders — to evaluate the risks and safety issues of AI models used in a system. Also, related to the above, investors should holistically evaluate the system in its capacity and ability to be built responsibly. For example, if you have trust and safety features in the build or a model card published, but your revenue model exploits vulnerable population data, then there’s misalignment to your intent as an investor. I do think you can build responsibly and still be profitable. Lastly, I would love to see more collaborative funding opportunities among investors. In the realm of wellbeing and mental health, the solutions will be varied and vast as no person is the same and no one solution can solve for all. Collective action among investors who are interested in solving the problem would be a welcome addition.

Anand Mahindra Praises Swaayatt Robots’ Level 5 Autonomy Efforts

Anand Mahindra, Chairman of Mahindra and Mahindra, praised the Indian autonomous driving company Swaayatt Robots for trying to achieve Level 5 autonomy. “Evidence of tech innovation rising across India. An engineer who’s not just building yet another delivery app,” he posted on X, attaching a Swaayatt Robots demo video.

“Sanjeev is using complex math to target level 5 autonomy. I’m cheering loudly. And certainly won’t debate his choice of car!” he added.

Evidence of tech innovation rising across India.
An engineer who’s not building yet another delivery app. @sanjeevs_iitr is using complex math to target level 5 autonomy.
I’m cheering loudly. 👏🏽👏🏽👏🏽
And certainly won’t debate his choice of car! pic.twitter.com/luyJXAkQap

— anand mahindra (@anandmahindra) April 2, 2024

Last year, Swaayatt Robots announced that it had achieved the world’s first Level 5 autonomous driving capability. In the demonstration, its autonomous vehicle, a Mahindra Bolero, learned to negotiate complex traffic dynamics at a toll plaza and successfully crossed highly unstructured toll gates.

Interestingly, Swaayatt Robots uses the Mahindra Bolero for its demonstrations. When AIM got in touch with Swaayatt’s chief, Sanjeev Sharma, he said that he uses the Bolero because when he started the company, it was the only vehicle he owned and it was cheaper.

“Mahindra Bolero is the easiest of the vehicles and one of the most inexpensive vehicles. If we really want to just install an electromechanical system, then it is one of the most inexpensive vehicles available,” said Sharma.

Recently, Swaayatt Robots posted a demonstration where its autonomous vehicle drives through traffic and environmental scenarios. The autonomous vehicle starts from a generic open environment at the temple, where there are no traffic rules to abide by. It then exits the region and assumes a generic autonomous navigation behavior, negotiating complex traffic scenes.

The post Anand Mahindra Praises Swaayatt Robots’ Level 5 Autonomy Efforts appeared first on Analytics India Magazine.

5 Common Python Gotchas (And How To Avoid Them)

Image by Author

Python is a beginner-friendly and versatile programming language known for its simplicity and readability. Its elegant syntax, however, is not immune to quirks that can surprise even experienced Python developers. And understanding these is essential for writing bug-free code—or pain-free debugging if you will.

This tutorial explores some of these gotchas: mutable defaults, variable scope in loops and comprehensions, tuple assignment, and more. We’ll code simple examples to see why things work the way they do, and also look at how we can avoid these (if we actually can 🙂).

So let’s get started!

1. Mutable Defaults

In Python, mutable defaults are common sharp corners. You’ll run into unexpected behavior anytime you define a function with mutable objects, like lists or dictionaries, as default arguments.

The default value is evaluated only once, when the function is defined, and not each time the function is called. This can lead to unexpected behavior if you mutate the default argument within the function.

Let's take an example:

def add_to_cart(item, cart=[]):
    cart.append(item)
    return cart

In this example, add_to_cart is a function that takes an item and appends it to a list cart. The default value of cart is an empty list, so calling the function without passing a cart is meant to start each user off with an empty cart.

And here are a couple of function calls:

# User 1 adds items to their cart
user1_cart = add_to_cart("Apple")
print("User 1 Cart:", user1_cart)

Output >>> ['Apple']

This works as expected. But what happens now?

# User 2 adds items to their cart
user2_cart = add_to_cart("Cookies")
print("User 2 Cart:", user2_cart)

Output >>> ['Apple', 'Cookies']  # User 2 never added apples to their cart!

Because the default argument is a list—a mutable object—it retains its state between function calls. So each time you call add_to_cart, it appends the value to the same list object created during the function definition. In this example, it’s like all users sharing the same cart.

How To Avoid

As a workaround, you can set cart to None and initialize the cart inside the function like so:

def add_to_cart(item, cart=None):
    if cart is None:
        cart = []
    cart.append(item)
    return cart

So each user now has a separate cart. 🙂
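To confirm the fix behaves as intended, here’s a quick sanity check (repeating the corrected definition so it runs on its own):

```python
def add_to_cart(item, cart=None):
    # A fresh list is created on each call when no cart is passed
    if cart is None:
        cart = []
    cart.append(item)
    return cart

user1_cart = add_to_cart("Apple")
user2_cart = add_to_cart("Cookies")
print("User 1 Cart:", user1_cart)  # ['Apple']
print("User 2 Cart:", user2_cart)  # ['Cookies']
```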

If you need a refresher on Python functions and function arguments, read Python Function Arguments: A Definitive Guide.

2. Variable Scope in Loops and Comprehensions

Python's scope oddities call for a tutorial of their own. But we’ll look at one such oddity here.

Look at the following snippet:

x = 10
squares = []
for x in range(5):
    squares.append(x ** 2)

print("Squares list:", squares)

# x is accessible here and is the last value of the looping var
print("x after for loop:", x)

The variable x is set to 10, but x also doubles as the looping variable. We’d assume that the looping variable’s scope is limited to the for loop block, yes?

Let’s look at the output:

Output >>>

Squares list: [0, 1, 4, 9, 16]
x after for loop: 4

We see that x is now 4, the final value it takes in the loop, and not the initial value of 10 we set it to.

Now let’s see what happens if we replace the for loop with a comprehension expression:

x = 10
squares = [x ** 2 for x in range(5)]

print("Squares list:", squares)

# x is 10 here
print("x after list comprehension:", x)

Here, x is 10, the value we set it to before the comprehension expression:

Output >>>

Squares list: [0, 1, 4, 9, 16]
x after list comprehension: 10

How To Avoid

To avoid unexpected behavior: If you’re using loops, ensure that you don’t name the looping variable the same as another variable you’d want to access later.
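For instance, giving the loop its own variable name keeps the outer x untouched (a minimal sketch of the earlier example with the loop variable renamed):

```python
x = 10
squares = []
for i in range(5):  # distinct name for the loop variable
    squares.append(i ** 2)

print("Squares list:", squares)   # [0, 1, 4, 9, 16]
print("x after for loop:", x)     # 10, unchanged
```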

3. Integer Identity Quirk

In Python, we use the is keyword for checking object identity. Meaning it checks whether two variables reference the same object in memory. And to check for equality, we use the == operator. Yes?

Now, start a Python REPL and run the following code:

>>> a = 7
>>> b = 7
>>> a == b
True
>>> a is b
True

Now run this:

>>> x = 280
>>> y = 280
>>> x == y
True
>>> x is y
False

Wait, why does this happen? Well, this is due to "integer caching" or "interning" in CPython, the standard implementation of Python.

CPython caches integer objects in the range of -5 to 256. Meaning every time you use an integer within this range, Python will use the same object in memory. Therefore, when you compare two integers within this range using the is keyword, the result is True because they refer to the same object in memory.

That’s why a is b returns True. You can also verify this by printing out id(a) and id(b).

However, integers outside this range are not cached. And each occurrence of such integers creates a new object in memory.

So when you compare two integers outside the cached range using the is keyword (yes, x and y both set to 280 in our example), the result is False because they are indeed two different objects in memory.
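Here’s one way to check this yourself as a script. Note that the exact caching behavior is a CPython implementation detail, and the larger values are built at runtime (via int()) to sidestep the compiler’s habit of sharing equal constants within one module, which is why the REPL was used above:

```python
a = 7
b = 7
print(id(a) == id(b))  # True on CPython: small ints (-5..256) are cached

# Construct values outside the cached range at runtime
x = int("280")
y = int("280")
print(x == y)  # True: equal values
print(x is y)  # False on CPython: two distinct objects in memory
```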

How To Avoid

This behavior shouldn’t be a problem unless you try to use the is keyword for comparing the equality of two objects. So always use the == operator to check if any two Python objects have the same value.

4. Tuple Assignment and Mutable Objects

If you’re familiar with built-in data structures in Python, you know that tuples are immutable. So you cannot modify them in place. Data structures like lists and dictionaries, on the other hand, are mutable. Meaning you can change them in place.

But what about tuples that contain one or more mutable objects?

It's helpful to start a Python REPL and run this simple example:

>>> my_tuple = ([1, 2], 3, 4)
>>> my_tuple[0].append(3)
>>> my_tuple
([1, 2, 3], 3, 4)

Here, the first element of the tuple is a list with two elements. We try appending 3 to the first list and it works fine! Well, did we just modify a tuple in place?

Now let’s try to add two more elements to the list, but this time using the += operator:

>>> my_tuple[0] += [4, 5]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'tuple' object does not support item assignment

Yes, you get a TypeError which says the tuple object does not support item assignment. Which is expected. But let’s check the tuple:

>>> my_tuple
([1, 2, 3, 4, 5], 3, 4)

We see that elements 4 and 5 have been added to the list! Did the program just throw an error and succeed at the same time?

Well, for lists the += operator works by calling the __iadd__() method, which performs the addition in place, modifying the list, and then returns it. It’s the subsequent assignment of that result back into the tuple that raises the TypeError exception, but by then the elements have already been appended to the list. += is perhaps the sharpest corner!
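To make this concrete, here’s a minimal illustration of what += does for a list under the hood:

```python
lst = [1, 2]

# lst += [3] is roughly equivalent to: lst = lst.__iadd__([3])
result = lst.__iadd__([3])

print(result is lst)  # True: the very same list, extended in place
print(lst)            # [1, 2, 3]
```

With a tuple element, this in-place extension succeeds first; only the assignment of the result back into the tuple fails.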

How To Avoid

To avoid such quirks in your program, try using tuples only for immutable collections. And avoid using mutable objects as tuple elements as much as possible.
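If you do receive a tuple containing lists, one option is to freeze those lists into tuples first. This freezing step is an illustration, not part of the original example:

```python
my_tuple = ([1, 2], 3, 4)

# Convert any list elements into tuples so nothing can be mutated
frozen = tuple(tuple(e) if isinstance(e, list) else e for e in my_tuple)
print(frozen)  # ((1, 2), 3, 4)
```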

5. Shallow Copies of Mutable Objects

Mutability has been a recurring topic in our discussion thus far. So here’s another one to wrap up this tutorial.

Sometimes you may need to create independent copies of lists. Note first that an assignment like list2 = list1 doesn’t copy anything at all; it simply binds a second name to the same list object. But what happens when you create an actual copy, say with list1.copy(), list1[:], or copy.copy(list1)?

What you get is a shallow copy. It creates a new outer list, but the elements of that list are still references to the same objects as in the original. Modifying a nested element through the shallow copy will therefore affect both the original list and the shallow copy.

Let's take this example:

import copy

original_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Shallow copy of the original list
shallow_copy = copy.copy(original_list)

# Modify the shallow copy
shallow_copy[0][0] = 100

# Print both the lists
print("Original List:", original_list)
print("Shallow Copy:", shallow_copy)

We see that the changes to the shallow copy also affect the original list:

Output >>>

Original List: [[100, 2, 3], [4, 5, 6], [7, 8, 9]]
Shallow Copy: [[100, 2, 3], [4, 5, 6], [7, 8, 9]]

Here, we modify the first element of the first nested list in the shallow copy: shallow_copy[0][0] = 100. But we see that the modification affects both the original list and the shallow copy.

How To Avoid

To avoid this, you can create a deep copy like so:

import copy

original_list = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

# Deep copy of the original list
deep_copy = copy.deepcopy(original_list)

# Modify an element of the deep copy
deep_copy[0][0] = 100

# Print both lists
print("Original List:", original_list)
print("Deep Copy:", deep_copy)

Now, any modification to the deep copy leaves the original list unchanged.

Output >>>

Original List: [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
Deep Copy: [[100, 2, 3], [4, 5, 6], [7, 8, 9]]
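A quick way to see the difference between aliasing, shallow copies, and deep copies is to compare object identities with the is operator (a small sketch, not part of the original examples):

```python
import copy

original = [[1, 2], [3, 4]]

alias = original                  # same object, no copy at all
shallow = copy.copy(original)     # new outer list, shared inner lists
deep = copy.deepcopy(original)    # fully independent structure

print(alias is original)          # True
print(shallow is original)        # False
print(shallow[0] is original[0])  # True: inner lists are shared
print(deep[0] is original[0])     # False: inner lists were copied too
```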

Wrapping Up

And that’s a wrap! In this tutorial, we've explored several oddities in Python: from the surprising behavior of mutable defaults to the subtleties of shallow copying lists. This is only an introduction to Python’s oddities and is by no means an exhaustive list. You can find all the code examples on GitHub.

As you spend more time coding in Python and understand the language better, you’ll likely run into many more of these. So, keep coding, keep exploring!

Oh, and let us know in the comments if you’d like to read a sequel to this tutorial.

Bala Priya C is a developer and technical writer from India. She likes working at the intersection of math, programming, data science, and content creation. Her areas of interest and expertise include DevOps, data science, and natural language processing. She enjoys reading, writing, coding, and coffee! Currently, she's working on learning and sharing her knowledge with the developer community by authoring tutorials, how-to guides, opinion pieces, and more. Bala also creates engaging resource overviews and coding tutorials.


Reliance’s Tamil Nadu Campus may Mark Meta’s first India Data Centre Entry 


Mark Zuckerberg’s Meta is likely to set up its first data centre in India at the Reliance Industries campus in Chennai, The Economic Times reports. This will help the social media giant process user-generated content for Facebook, Instagram, and WhatsApp locally.

Currently, Meta’s Singapore data centre serves Indian user data. A local data centre in India, Meta’s largest market, will allow faster processing of both content and local advertisements, improving user experience and reducing transmission costs from global data hubs.

A three-way joint venture between Brookfield Asset Management, Reliance Industries and Digital Realty, the 10-acre campus (MAA10) in Chennai’s Ambattur Industrial Estate can support an IT load capacity of up to 100 megawatts (MW).

Meta has nearly twice as many users in India as in the US, and Meta Platforms reported that its advertisement revenue from click-to-message ads in India, across its flagship platforms, doubled during the September 2023 quarter. However, Neil Shah, partner at technology research firm Counterpoint Research, told ET that the Indian market is still underpenetrated considering the installed base of nearly 850 million smartphone users.

He called localising user-generated content and ads a prudent strategy, as it will help reduce latency, enhance AI-driven recommendations, and save transmission costs.

As per a study by CareEdge Ratings, a credit rating and analytics company, India’s data centre capacity is expected to double in three years from 0.9 GW in 2023 to around 2 GW in 2026.

Recently, several giants, including AdaniConnex, Reliance, Sify, Atlassian, Yotta, AWS, and Lenovo, have announced substantial investments in data centres across India. And despite the concentration of data centres in tier 1 cities, the need for edge computing and the desire to be closer to customers, offering faster response times and lower latency for time-sensitive applications, are fueling the expansion of data centres into tier 2, 3 and 4 cities.

This proximity is crucial for the emerging needs driven by 5G, OTT streaming, online gaming, and AI technologies, and its benefits are many.

Tier 2 and Tier 3 cities represent untapped markets with considerable growth potential; the availability of more space and less stringent regulations can facilitate the adoption of sustainable practices, including using renewable energy sources and advanced cooling technologies.

All this can contribute to more balanced regional development, create job opportunities, and support government initiatives to digitise the nation’s economy.

The post Reliance’s Tamil Nadu Campus may Mark Meta’s first India Data Centre Entry appeared first on Analytics India Magazine.