Genpact Integrates Gen AI into Enterprise360 Intelligence Platform

Genpact announced that it has incorporated generative AI capabilities into its proprietary Enterprise360 intelligence platform leveraged by clients worldwide.

Enterprise360 is Genpact’s cutting-edge platform for managing and transforming client operations. The platform brings together digital tools, standardised processes, and now the power of generative AI to help businesses identify and address operational issues and optimise performance.

With access to replicable solutions and a wealth of data-driven insights, teams can expedite value delivery and drive transformative change, while leveraging actionable feedback and fostering a culture of continuous improvement.

Enterprise360 is used by Genpact’s global talent, serving over 800 clients. The platform helps businesses digitise critical operating rituals and provides visibility into almost 500 metrics and more than 250 benchmarks.

“The integration of generative AI into Enterprise360 is a significant milestone for Genpact. This new capability will help our clients drive even greater performance and transformation, and we are excited to see the impact it will have on their businesses.

“From design to operations, transformation to renewals, Genpact’s Enterprise360 offers comprehensive solutions that address the complete lifecycle of client requirements,” Vidya Rao, Chief Information Officer, Genpact, said.

The post Genpact Integrates Gen AI into Enterprise360 Intelligence Platform appeared first on Analytics India Magazine.

ChatGPT is getting a slew of updates this week. Here’s what you need to know


Among the many AI chatbots, Bing Chat has been leading on updates, with new ones released almost weekly. However, ChatGPT may be closing the gap, with many highly anticipated upgrades on the horizon.

Logan Kilpatrick, who works in developer relations at OpenAI, shared on X, previously Twitter, a list of new ChatGPT features slated to be released this week.


Per the X post, these features include example prompts, suggested replies, GPT-4 by default, uploading multiple files into Code Interpreter for beta users, staying logged in, and keyboard shortcuts.

Many of these features have already been available on Bing Chat, including suggested replies, GPT-4 by default, and staying logged in, and they have been highly anticipated on ChatGPT.

These features will all make using ChatGPT a more seamless experience, from staying logged in and finding the right prompt through to the improved output quality of GPT-4.


For example, instead of staring at a blank screen and not knowing what to ask ChatGPT to do, you can use the example prompts. Then, you can follow up with a series of suggested replies that will automatically populate to make your information-seeking experience easier.

The Decoder shared a screenshot of the suggested answers feature, reporting that it is already available in ChatGPT. However, I could not access this feature, and per Kilpatrick's post, it seems to be coming soon.

Nonetheless, the photo helps visualize what the feature will look like.

Not having to sign in every time will also be a time saver and productivity enhancer, as remembering which account you used to log in with and then recalling your credentials can be time-consuming.


Flower lands $3.6M to grow its platform for federated learning

By Kyle Wiggers

The reliance on public data — mostly web data — to train AI is holding back the AI field. That’s according to Daniel Beutel, a tech entrepreneur and researcher at the University of Cambridge, who co-founded a startup, Flower, to solve what he sees as a growing problem in AI research.

“Public, centralized data is only a tiny fraction of all the data in the world,” Beutel told TechCrunch in an email interview. “In contrast, distributed data — the data that’s trapped on devices like phones, wearables and internet of things devices or in organizational silos, such as business units within an enterprise — is much larger and more comprehensive, but out of reach for AI today.”

Flower, which Beutel co-started in 2020 with Cambridge colleagues Taner Topal and Nicholas Lane, the ex-head of Samsung’s AI Center in Cambridge, is an attempt to “decentralize” the AI training process through a platform that allows developers to train models on data spread across thousands of devices and locations. Relying on a technique called federated learning, Flower doesn’t provide direct access to data, making it ostensibly “safer” to train on in situations where privacy or compliance are concerns.

“Flower believes that, once made easy and accessible because of the fundamental advantages of distributed data, this approach to AI will not only become mainstream, but also the norm for how AI training is performed,” Beutel said.

Federated learning isn’t a new approach. First proposed in academia years ago, the technique entails training AI algorithms across decentralized devices holding data samples without exchanging those samples. A centralized server might be used to orchestrate the algorithm’s training, or the orchestration might happen on a peer-to-peer basis. But in any case, local algorithms are trained on local data samples, and the weights — the algorithms’ learnable components — are exchanged between them to generate a global model.
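The training-and-averaging loop described above can be sketched in a few lines. This is a toy illustration of federated averaging (FedAvg) with a linear model and synthetic data, not Flower's actual API; the device data, model, and hyperparameters are all made up for illustration:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a linear model locally with plain gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_average(local_weights, sizes):
    """Merge local models, weighting each by its dataset size."""
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, sizes))

# Two "devices", each holding private data that never leaves this scope.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
datasets = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ true_w
    datasets.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # federated rounds
    local_ws = [local_update(global_w, X, y) for X, y in datasets]
    global_w = federated_average(local_ws, [len(y) for _, y in datasets])
```

Only the weights cross the device boundary; the raw `(X, y)` samples never do, which is the privacy property federated learning relies on.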


Flower’s platform leverages federated learning to offer a decentralized alternative for AI model training.

Startups like DynamoFL, DataFleets and Sherpa are employing federated learning in some form to train AI models, as are Big Tech companies like Google.

“With Flower, the data never needs to leave the source device or location (e.g., a company facility) during training,” Beutel explains. “Instead, ‘compute goes to the data,’ and partial training is performed at each location where the data resides — with only training results and not the data eventually being transmitted and merged with the results of all other locations.”

Flower recently launched FedGPT, a federated approach to training large language models (LLMs) comparable to OpenAI’s ChatGPT and GPT-4. Currently in preview, FedGPT lets companies train LLMs on data spread around the world and on different devices, including data centers and workstations.

“FedGPT is important because it allows organizations to build LLMs using internal, sensitive data without sharing them with an LLM provider,” Beutel said. “Companies also often have data spread around the world, or in different parts of the organization, that are unable to move or leave a geographic region. FedGPT lets all of this data be leveraged when training an LLM while still respecting concerns over privacy and data leakage, and laws restricting data movement.”

Flower is also partnering with Brave, the open source web browser, to spearhead a project called Dandelion. The goal is to build an open source, federated learning system spanning the over 50 million Brave browser clients in use today, Beutel says.

“AI is entering a time of increasing regulation and special care over the provenance of the data it uses,” Beutel said. “Customers can build AI systems using Flower where user privacy is strongly protected, and yet they are still able to leverage more data than they ever could before … Under Flower, due to federated learning principles, an AI system can still successfully be deployed and trained under different constraints.”

Flower’s seen impressive uptake over the past several months, with its community of developers growing to just over 2,300, according to Beutel. He claims that “dozens” of Fortune 500 companies and academic institutions are Flower users, including Porsche, Bosch, Samsung, Banking Circle, Nokia, Stanford, Oxford, MIT and Harvard.

Buoyed by those metrics, Flower — a member of one of Y Combinator’s 2023 cohorts — has attracted investors like First Spark Ventures, Hugging Face CEO Clem Delangue, Factorial Capital, Betaworks, and Pioneer Fund. In its pre-seed round, the startup raised $3.6 million.

Beutel says that the round will be put toward expanding Flower’s core team, growing its team of researchers and developers and accelerating the development of the open source software that powers Flower’s framework and ecosystem.

“AI is facing a crisis of reproducibility, and this is even more acute for federated learning,” Beutel said. “Due to the lack of widespread training on distributed data, we lack a critical mass of open-source software implementations of popular approaches … By everyone working together, we aim to have the world’s largest set of open-source federated techniques available on Flower for the community.”

TSMC, Bosch, Infineon, and NXP Forge Joint Venture to Propel Advanced Semiconductor Manufacturing in Europe

TSMC (Taiwan Semiconductor Manufacturing Company), Robert Bosch GmbH, Infineon Technologies AG, and NXP Semiconductors N.V. today announced their joint investment in the European Semiconductor Manufacturing Company (ESMC) GmbH, located in Dresden, Germany.

This strategic initiative aims to introduce cutting-edge semiconductor manufacturing capabilities to Europe, setting the stage for future innovation and growth in the automotive and industrial sectors.

The planned joint venture will be 70% owned by TSMC, with Bosch, Infineon, and NXP each holding a 10% equity stake, subject to regulatory approvals and other conditions. Total investments are expected to exceed 10 billion euros, consisting of equity injections, debt borrowing, and strong support from the European Union and the German government. The fab will be operated by TSMC.

TSMC has also approved an equity investment of 3.5 billion euros ($3.8 billion) in European Semiconductor Manufacturing Company (ESMC) GmbH to provide foundry services.

Underpinned by the ambitious European Chips Act, the establishment of ESMC heralds a pivotal advancement in semiconductor manufacturing in Europe. The project envisions the construction of a state-of-the-art 300mm fabrication facility. The anticipated fabrication plant is poised to feature a robust monthly production capacity of 40,000 300mm (12-inch) wafers, leveraging TSMC’s acclaimed 28/22 nanometer planar CMOS and 16/12 nanometer FinFET process technology.

Construction of the facility is slated to begin in the second half of 2024, with production expected to start by the end of 2027.

TSMC has also committed a substantial $40 billion investment towards establishing a new facility in the western United States, specifically in Arizona. This move aligns with efforts to bolster domestic chip manufacturing in line with Washington’s strategic objectives. Moreover, TSMC’s global expansion endeavors extend to Japan, where it is collaborating with Sony in the joint venture construction of a new plant.


5 Python Packages For Geospatial Data Analysis

Introduction

Geospatial data analysis is critical in urban planning, environmental research, agriculture, and transportation industries. The growing need has led to an increase in the use of Python packages for various geographic data analysis requirements, such as analyzing climate patterns, investigating urban development, or tracking the spread of diseases, among others. Evaluating and selecting the right tools with quick processing, modification, and visualization capabilities is essential to effectively analyze and visualize geospatial data.

Understanding Geospatial Data

It is essential first to understand what geospatial data is. Geospatial data is data with a geographic component representing the position and qualities of objects, features, or occurrences on the Earth's surface. It describes the spatial connections, distributions, and properties of diverse items in the physical world. Geospatial data is primarily of two types:

  • Raster data: It is suitable for continuous information without fixed borders, represented as a grid of cells with values indicating observed features. It is often monitored at regular intervals and interpolated to create a continuous surface.
  • Vector data: It uses points, lines, and polygons to represent spatial properties, including points of interest, transportation networks, administrative boundaries, and land parcels, often used for discrete data with precise positions or hard constraints.
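The distinction can be illustrated in plain Python (the values below are made up for illustration): a raster is a grid whose cell positions are implicit, while vector features carry their coordinates explicitly.

```python
# Raster: a regular grid of cells, each holding an observed value
# (e.g. elevation in metres). A cell's location is implicit, derived
# from its row/column indices plus a georeference.
raster = [
    [12.1, 12.4, 12.9],
    [11.8, 12.0, 12.5],
    [11.2, 11.6, 12.2],
]

# Vector: explicit geometries with coordinates and attributes.
point_of_interest = {"type": "Point", "coords": (77.59, 12.97),
                     "name": "city centre"}
boundary = {"type": "Polygon",
            "coords": [(0, 0), (4, 0), (4, 3), (0, 3), (0, 0)],
            "name": "land parcel"}

cell_value = raster[1][2]  # value at row 1, column 2
```

Note that the polygon's coordinate ring closes on itself (first point equals last), the usual convention in vector formats such as GeoJSON.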

Geospatial data may be stored in a variety of formats, such as:

  • ESRI Shapefile
  • GeoJSON
  • Erdas Imagine Image File Format
  • GeoTIFF
  • GeoPackage (GPKG)
  • Light Detection and Ranging (LiDAR), and many others.

Geospatial data encompasses various types, such as satellite images, elevation models, point clouds, land use classifications, and text-based information, offering valuable insights for spatial analysis and decision-making across industries. Major corporations like Microsoft, Google, Esri, and Amazon Web Services leverage geospatial data for valuable insights. Let's explore the top five Python packages for geospatial data analysis. These packages enable data reading/writing, manipulation, visualization, geocoding, and geographical indexing, catering to beginners and experienced users. Discover how these packages empower effective exploration, visualization, and insights extraction from geospatial data. Let's begin!

1. Geopandas

Suitable for: Vector Data

Geopandas is a widely used Python library for working with vector geospatial data, providing intuitive geographic data handling in Pandas DataFrames. It supports formats like Shapefiles and GeoJSON and offers spatial operations such as merging, grouping, and spatial joins. Geopandas integrates seamlessly with popular libraries like Pandas, NumPy, and Matplotlib. It can handle large datasets, but this can pose challenges. Geopandas package is commonly used for spatial data analysis tasks, including spatial joins, queries, and geospatial operations like buffering and intersection analysis. Geopandas requires different packages like Shapely to handle geometric operations, Fiona to access files, and matplotlib for plotting.

For example, Geopandas can be used to explore real estate data to identify the most expensive neighborhoods in a city or to analyze population data to visualize the growth and migration patterns of different communities.
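A minimal sketch of that kind of real estate analysis, using made-up listing prices and locations and a simple radius query (illustrative data only, not a real dataset):

```python
import geopandas
from shapely.geometry import Point

# Hypothetical listings: a price attribute attached to point geometries.
listings = geopandas.GeoDataFrame(
    {"price": [450_000, 720_000, 310_000]},
    geometry=[Point(0, 0), Point(1, 1), Point(5, 5)],
)

# Build a circular search area of radius 2 around a query point,
# then keep only the listings that fall inside it.
search_area = Point(0.5, 0.5).buffer(2.0)
nearby = listings[listings.geometry.within(search_area)]

average_nearby_price = nearby["price"].mean()
```

The same `within`/`buffer` pattern scales to spatial joins between whole GeoDataFrames via `geopandas.sjoin`.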

We can use pip to install the package:

pip install geopandas

Plotting with GeoPandas

Let us view the built-in maps as shown below:

import geopandas

# Check available maps
geopandas.datasets.available

We will use Geopandas to load a dataset of the world map, extract the shapefile for the United States, and plot it on a graph with the following code:

# Selecting a particular map
geopandas.datasets.get_path('naturalearth_lowres')

# Open the selected map - GeoDataFrame
world = geopandas.read_file(geopandas.datasets.get_path('naturalearth_lowres'))

# Create a subset of the GeoDataFrame
usa = world[world.name == "United States of America"]

# Plot the subset
usa.plot();

The above code plots a map of the subset GeoDataFrame:

2. Folium

Suitable for: Point clouds

Folium is a Python library for creating interactive maps with markers, pop-ups, choropleths, and other geospatial visualizations. It integrates with the Leaflet JavaScript library and allows exporting maps to HTML. It can be combined with Geopandas and Cartopy and handles large datasets using Map Tiles. Folium excels in simplicity, aesthetics, and integration with other geospatial libraries. However, for advanced geospatial analysis and manipulation, Folium may have limitations.

For example, Folium could be utilized in supply chain and logistics for visualizing distribution networks, optimizing routes, and monitoring shipment locations.

We can install Folium with the following command:

pip install folium

Plotting with Folium

Let us display a sample interactive map centered at [0, 0] with a marker placed at the same location using the following lines of code:

import folium

# Generate a Folium map with center coordinates (0, 0)
m = folium.Map(location=[0, 0], zoom_start=2)

# Place a marker at the coordinates (0, 0)
folium.Marker([0, 0]).add_to(m)

# Display the map
m


This map can be customized further by adding markers, layers, or styling options based on specific geospatial data.

3. ipyleaflet

Suitable for: Point clouds, interactive

The ipyleaflet package enables the easy creation of interactive maps in Python, particularly within Jupyter notebooks, allowing users to generate and share interactive maps with various basemaps, markers, and other geospatial operations. Built on the leaflet JavaScript library, ipyleaflet supports GeoJSON and WMS layers, CSS and JavaScript styling, and geospatial calculations. While ipyleaflet excels in interactive widgets, it may not be ideal for pure Python-based projects due to its JavaScript dependency.

For example, ipyleaflet can be applied in environmental monitoring to visualize sensor data, monitor air quality, and assess environmental changes in real time.

To install ipyleaflet, we use the pip command:

pip install ipyleaflet

Plotting with ipyleaflet

Let us create an interactive map with a marker placed at the coordinates (40.7128, -74.0060) to represent a point of interest in New York City using the code below:

from ipyleaflet import Map, Marker

# Create the map
m = Map(center=(40.7128, -74.0060), zoom=12)

# Add the marker
marker = Marker(location=(40.7128, -74.0060))
m.add_layer(marker)

# Display the map
m

Here is an output for the code:

4. Rasterio

Suitable for: Raster data

Rasterio is a powerful Python library for working with geospatial raster data, offering efficient performance and a wide range of operations like cropping, reprojecting, and resampling. It supports various raster formats and integrates well with other geospatial libraries, although it has limitations in handling vector data and complex analysis tasks. Nevertheless, Rasterio is an essential tool for efficient raster data manipulation and processing in Python.

For example, rasterio can be used in tasks such as reading and writing satellite imagery, performing terrain analysis, extracting data from digital elevation models, and conducting remote sensing analysis.

!pip install rasterio

The rasterio.open() function opens the file and returns a dataset whose bands can be read as NumPy arrays with the read() method. The show() helper from rasterio.plot, built on Matplotlib, renders a band or the whole dataset, and plt.show() displays the plot in the output.

Plotting with rasterio

import rasterio
from rasterio.plot import show
import matplotlib.pyplot as plt

We use the rasterio library to open and visualize a raster image from the 'sample.tif' file from the dataset ‘High-resolution GeoTIFF images of climatic data’ on Kaggle, displaying the red channel (one of the color channels in the image) as one subplot with a Reds color map, and the original image (comprising multiple color channels) as another subplot with a viridis color map. Other color channels, such as green and blue, can also be visualized using this approach.

# Open the raster file
src = rasterio.open('/content/sample.tif')

# One subplot for the red channel, one for the full image
fig, (axr, axg) = plt.subplots(1, 2, figsize=(15, 7))
show((src, 1), ax=axr, cmap='Reds', title='red channel')
show(src, ax=axg, cmap='viridis', title='original image')
plt.show()

Original GeoTIFF Image (right) source: kaggle.com

Analyzing specific color channels such as red, blue, and green in geospatial analysis is done to focus on or extract valuable information related to specific attributes, features, or characteristics represented by those color components of the image. Examples could include vegetation health in remote sensing, vegetation indices or water bodies, etc.
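A common example of such channel arithmetic is a vegetation index: the NDVI combines red and near-infrared bands as (NIR - Red) / (NIR + Red), with values near 1 suggesting dense vegetation. A toy sketch with made-up reflectance values:

```python
import numpy as np

# Toy 3x3 red and near-infrared bands (illustrative reflectance values).
red = np.array([[0.1, 0.2, 0.3],
                [0.2, 0.3, 0.4],
                [0.1, 0.1, 0.2]])
nir = np.array([[0.6, 0.6, 0.5],
                [0.7, 0.5, 0.4],
                [0.8, 0.7, 0.6]])

# NDVI = (NIR - Red) / (NIR + Red), computed per pixel.
ndvi = (nir - red) / (nir + red)

# Threshold the index to flag likely dense-vegetation pixels.
dense_vegetation = ndvi > 0.5
```

In practice the `red` and `nir` arrays would come from `src.read()` on a multi-band raster rather than being typed by hand.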

5. Geoplot

Suitable for: vector data, interactive

Geoplot is a user-friendly Python library for quickly creating visually appealing geospatial visualizations, including choropleth maps and scatter plots. It seamlessly integrates with popular data manipulation libraries like Pandas and supports multiple map projections. However, Geoplot has limitations regarding interactive map support and a smaller range of plot types than specialized geospatial libraries. Nonetheless, it remains valuable for quick geospatial data visualization and gaining insights into spatial patterns.

!pip install geoplot

Plotting with geoplot

We will plot a choropleth map visualization using Geoplot, where we select the Asian countries from a world shapefile based on the "continent" attribute, assign the color intensity based on the "pop_est" attribute, and plot the map using the "icefire" color map with a legend and a figure size of 10 by 5.

import geoplot

# Plotting population for Asia
asia = world.query("continent == 'Asia'")
geoplot.choropleth(asia, hue="pop_est", cmap="icefire", legend=True, figsize=(10, 5));


For example, the geoplot package can create choropleth maps to visualize population density, plot spatial patterns of crime incidents, display the distribution of environmental factors, and analyze the spread of diseases based on geographical data.

Conclusion

In conclusion, the geospatial Python packages help effectively analyze location-based information. Each of the discussed packages has its strengths and weaknesses, but together they can form a powerful suite of Python tools when working with geospatial data. So, for beginners or seasoned GIS professionals, these packages are valuable for analyzing, visualizing, and manipulating geospatial data in new and innovative ways.

You can find the code for this article on my GitHub repository here.

If you found this article insightful, connect with me on LinkedIn and Twitter. Remember to follow me on Kaggle, where you can access my machine learning and deep learning projects, notebooks, and captivating data visualizations.
Devashree Madhugiri holds an M.Eng degree in Information Technology from Germany and has a background in Data Science. She likes working on different Machine Learning and Deep Learning projects. She enjoys sharing her knowledge in AI by writing technical articles related to data visualization, machine learning, computer vision on various technological platforms. She is currently a Kaggle Notebooks Master and loves solving Sudoku puzzles in her leisure time.


TSMC Fetches Money for Apple

In its recent earnings report, Apple saw a decline in sales of its three major product lines: iPhone, Mac, and iPad. iPhone revenue went down by 2.4% to $39.7 billion this quarter as compared to the corresponding quarter last year, while Mac sales fell by 7.3% to $6.8 billion.

One expected reason is the absence of a new processor in these devices. The iPhone 14 and 14 Plus were powered by Apple’s A15 Bionic chip, the same chip that’s in the iPhone 13 Pro. It seems like Apple is not going to repeat the same mistake this time and is planning to pack a punch in the new MacBook and iPhone 15 Pro.

TSMC’s Generous Deal

When the iPhone 15 Pro launches in September this year, it is rumored to ship with the A17 chip, Apple’s first 3nm chip, developed by TSMC.

The 3nm node allows transistors to be even more densely packed, resulting in better performance and efficiency. Recently, The Information came out with an interesting revelation: TSMC has come up with a sweet deal for Apple through which Apple would be able to save billions of dollars on iPhone, iPad, and Mac chips.

When new and better chip technology like the 3nm node is introduced, there are often some chips that don’t work perfectly at first. TSMC, the company making these chips for Apple, usually charges for all the chips on a wafer, even the ones that don’t work. But in this new deal, TSMC is only charging Apple for the good chips, and Apple’s big orders help cover the costs of the chips that don’t work, which is unusual.
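With illustrative numbers only (actual wafer prices, die counts, and yields are not public), the difference between paying per wafer and paying per good die works out like this:

```python
# Illustrative figures only -- not actual TSMC pricing or yield data.
wafer_price = 17_000   # dollars per 3nm wafer (assumed)
dies_per_wafer = 620   # candidate chips per wafer (assumed)
yield_rate = 0.70      # fraction of dies that work (assumed)

good_dies = int(dies_per_wafer * yield_rate)

# Traditional model: the buyer pays for the whole wafer, so the cost of
# defective dies is spread over the good ones.
cost_per_good_die_wafer_pricing = wafer_price / good_dies

# Reported new model: the buyer pays only for known-good dies at the
# implied per-die price, so the foundry absorbs the defect cost.
cost_per_good_die_good_die_pricing = wafer_price / dies_per_wafer

savings_per_good_die = (cost_per_good_die_wafer_pricing
                        - cost_per_good_die_good_die_pricing)
```

At these assumed numbers the saving is a few dollars per chip, which multiplied across tens of millions of iPhones is how the reported billions accumulate.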

Apple accounts for about 25% of TSMC’s revenue, which helps TSMC develop new technology and build the facilities needed to make these chips while testing them on Apple products. The orders from Apple are so big that it makes sense for TSMC to spend a bit more money to help Apple out.

The strong bond between Apple and TSMC is no shocker, given their nearly decade-long partnership. The journey kicked off around ten years ago when Apple shifted its iPhone’s processor chip production to TSMC. The first Apple-designed mobile application processor for the iPhone to be manufactured at TSMC was the A8, which began shipping in the fall of 2014 after Apple broke up with Samsung.

The same thing happened with the Mac when Apple broke up with Intel. The year was 2020 when Apple announced that in the next two years, the company would transition all of its Mac lineup to its own M-series chips, made by TSMC rather than depending on Intel. It’s unlikely that Apple will reignite its relationship with Intel.

Not only this, Apple has recently started trying out its advanced laptop processor, the M3 Max. This will lead to the launch of the most powerful MacBook Pro ever in the coming year. The M3 Max chip comes with 16 main processing cores and 40 graphics cores, as reported by Bloomberg, citing a developer of a Mac app who saw the test logs.

This chip will be used in a high-end MacBook Pro laptop, which is expected to be released next year and is known by the codename J514. The M3 chip will be a notable move for Apple, as it’s their first time using a 3-nanometer production process for Mac chips. This change is expected to bring improvements in battery life and better performance. According to Bloomberg, the transition to M3 chips is likely to begin in October.

Will Apple diversify?

Although industry experts think Apple should diversify its chip suppliers beyond TSMC, the reality is that Apple has consistently relied on TSMC for the past ten years. This relationship is symbiotic: Apple receives top-notch chips and TSMC gains its biggest customer. Apple’s influence is significant, to the extent that TSMC is even building a production facility in Arizona, USA. Apple has stated its plan to eventually get chips for iPhones and MacBooks from TSMC’s US plant. It won’t be wrong to say that behind every successful Apple product there is TSMC, working silently behind the scenes without grabbing any limelight.


KPMG’s Generative AI Play

Over the past year, the consulting and finance sector has moved from apprehension to enthusiasm about generative AI. One of the Big Four accounting firms, KPMG, has been using a ChatGPT-like framework to devise an in-house system that aids its staff by utilising proprietary data.

What’s more? KPMG also announced a commitment to spend $2 billion in an expanded AI alliance with Microsoft in July. The company had said back in December 2019 that it would allocate $5 billion over five years to advanced technologies like AI.

Meanwhile, PwC plans to allocate $1 billion towards advancing generative AI in its US activities in the next three years. Collaborating with Microsoft and OpenAI, the company aims to automate elements of its tax, audit, and consulting functions. Numerous teams are currently engaged in developing a multitude of AI and generative AI applications to enhance efficiency, reduce costs, save time, and uncover fresh perspectives.

Another Big Four, EY, is using generative AI for targeted business tasks like payroll queries. They’ve integrated tax laws into an AI system, allowing instant answers through a ChatGPT-like interface. This beta experiment has led to significant gains in efficiency and accuracy. As expected, Deloitte also launched a generative AI practice to equip its clients.

Speaking with AIM in an exclusive interaction, Sachin Arora, partner and head at Lighthouse (Data, AI, and Analytics), KPMG India, sheds light on the initial hesitation, the evolving mindset, and the innovative implementations that are shaping the future of these sectors.

Gen AI in Action

Last month, KPMG and Microsoft joined hands to modernise professional services with AI solutions, especially generative AI, aiming to streamline client engagement in auditing, taxation, and advisory sectors with Microsoft Cloud and Azure OpenAI service. KPMG plans to invest $2 billion in Microsoft Cloud and AI services over five years.

Earlier this year, Google Cloud and KPMG partnered to incorporate advanced generative AI into their processes. This venture will combine KPMG’s expertise in cloud computing, data analysis, and ethical AI practices with Google Cloud’s cutting-edge infrastructure and generative AI proficiencies.

“Collaborating with a GenAI service provider can be significant as it would impact the business. Aspects such as reputation and track record, customization and flexibility, data privacy and security, ethical considerations, scalability and performance, intellectual property considerations, future roadmap and, last but not least, cost and pricing structures are key to the selection of the right GenAI service provider,” said Arora.

At KPMG, the integration of generative AI involves leveraging open-source vector embeddings and databases. This approach facilitates the seamless incorporation of organisational data into widely-used language models, expediting responses and enhancing interactions. By utilising this custom framework, KPMG is at the forefront of embracing generative AI for enhanced customer experiences and operational efficiency.
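The retrieval pattern behind that kind of framework can be sketched with toy numbers. This is a generic illustration of embedding-based lookup, not KPMG's actual system: the document names and vectors below are invented, and in practice the vectors would come from an embedding model and live in a vector database.

```python
import numpy as np

# Toy document store: each internal document has an embedding vector.
documents = ["Q2 audit checklist", "payroll tax guidance", "cloud cost report"]
doc_vectors = np.array([[0.9, 0.1, 0.0],
                        [0.1, 0.9, 0.1],
                        [0.0, 0.2, 0.9]])

def retrieve(query_vector, k=1):
    """Return the k documents most similar to the query (cosine similarity)."""
    sims = doc_vectors @ query_vector
    sims = sims / (np.linalg.norm(doc_vectors, axis=1)
                   * np.linalg.norm(query_vector))
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

# A query embedding that happens to sit close to the payroll document.
context = retrieve(np.array([0.2, 0.8, 0.1]))
# The retrieved text would then be prepended to the LLM prompt as context.
```

Grounding the model's prompt in retrieved internal documents is what lets organisational data shape the responses without retraining the language model itself.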

“Today, generative AI is revolutionising how consulting and finance firms operate across various dimensions. The integration of generative AI is evident in diverse use cases,” he added. According to him, AI plays a multifaceted role: it boosts customer support through chatbots, improves data analysis and predictions for better decision-making, aids marketing with AI-generated content, and tailors financial services by offering personalised banking assistance and investment plans.

Overcoming Early Hurdles

“The initial reluctance stemmed from data security and privacy worries due to sensitive data involvement, ethical concerns regarding biased or misleading AI-generated content, and the challenge of adhering to stringent regulatory frameworks,” said Arora.

Initially wary of venturing into generative AI, consulting and finance sectors faced huge challenges around data security, ethics, and regulations. Ethical concerns arose over misleading AI-generated content in financial advice and consulting services, raising regulatory and reputational risks. The stringent regulatory landscape further exacerbated hesitancy.

However, a transformative shift has occurred over time, exemplified by firms like KPMG. This change is driven by technological advancements, making AI more accessible and user-friendly. Pre-built AI platforms facilitated experimentation, leading to successful generative AI projects. Innovations in data management addressed data privacy apprehensions, with privacy-preserving AI techniques ensuring secure implementation. The establishment of ethical guidelines and standards enabled responsible AI usage and mitigated ethical dilemmas tied to AI-generated content.

Looking ahead, KPMG envisions significant developments in the realm of GenAI, machine learning, and analytics over the coming months. In the short term, the integration of GenAI with traditional AI and analytics is expected to substantially boost employee productivity. This amalgamation has the potential to expedite the rollout of new products and developments, disrupting conventional information brokering practices.


The post KPMG’s Generative AI Play appeared first on Analytics India Magazine.

This Week in AI, August 7: Generative AI Comes to Jupyter & Stack Overflow • ChatGPT Updates

Image created by Editor with Midjourney

Welcome to this week's edition of "This Week in AI" on KDnuggets. This curated weekly post aims to keep you abreast of the most compelling developments in the rapidly advancing world of artificial intelligence. From groundbreaking headlines that shape our understanding of AI's role in society to thought-provoking articles, insightful learning resources, and spotlighted research pushing the boundaries of our knowledge, this post provides a comprehensive overview of AI's current landscape.

Headlines

The "Headlines" section discusses the top news and developments from the past week in the field of artificial intelligence. The information ranges from governmental AI policies to technological advancements and corporate innovations in AI.

💡 Generative AI in Jupyter

The open source Project Jupyter team has released Jupyter AI, a new extension that brings generative AI capabilities directly into Jupyter notebooks and the JupyterLab IDE. Jupyter AI lets users leverage large language models via chat interactions and magic commands to explain code, generate new code and content, answer questions about local files, and more. It was built with responsible AI in mind, allowing control over model selection and tracking of AI-generated output. Jupyter AI supports providers like Anthropic, AWS, Cohere, and OpenAI. It aims to make AI accessible in an ethical way to enhance the Jupyter notebook experience.

💡 Announcing OverflowAI

Stack Overflow announced OverflowAI, their integration of AI capabilities into their public Q&A platform, Stack Overflow for Teams, and new products like IDE extensions. Features include semantic search to find more relevant results, ingesting enterprise knowledge to bootstrap internal Q&A faster, a Slack chatbot accessing Stack Overflow content, and a VS Code extension surfacing answers in developers' workflows. They aim to leverage their community's 58M+ questions while ensuring trust via attribution and transparency around AI-generated content. The goal is to use AI responsibly to enhance developers' efficiency by connecting them with solutions in context.

💡 ChatGPT Updates

Over the past week, several small updates were rolled out to enhance the ChatGPT experience. These updates included the introduction of prompt examples to help users begin chats, suggested replies for deeper engagement, and preferences for using GPT-4 by default for Plus users. Additional features such as multi-file uploads in the Code Interpreter beta for Plus users, a new stay-logged-in function, and a suite of keyboard shortcuts were also introduced to improve usability.

Articles

The "Articles" section presents an array of thought-provoking pieces on artificial intelligence. Each article dives deep into a specific topic, offering readers insights into various aspects of AI, including new techniques, revolutionary approaches, and ground-breaking tools.

📰 I Created An AI App In 3 Days

The author experimented with ChatGPT prompts to create an AI-powered cover letter generator web application called Tally.Work in just 3 days, using Bubble.io for the frontend and the OpenAI API for generating text. It takes a user's resume and job description as inputs and outputs a customized cover letter. The goal was to build an app with a large potential user base. Though AI-generated text isn't perfect yet, it can help create a useful first draft. The author believes AI will eliminate many tedious tasks like cover letters, and hopes this project helps lead to more interesting AI apps in the future. Overall it shows how quickly someone can use no-code tools and AI APIs to build and launch an app idea.

📰 Three challenges in deploying generative models in production

The article discusses three main challenges in deploying generative AI models like GPT-3 and Stable Diffusion in production: their massive size leading to high compute costs, biases that can propagate harmful stereotypes, and inconsistent output quality requiring tuning. Solutions include model compression, training on unbiased data, post-processing filters, prompt engineering, and model fine-tuning. Overall it outlines how companies must carefully address these issues to successfully leverage generative models while avoiding potential downsides.

Tools

The "Tools" section lists useful apps and scripts created by the community for those who want to get busy with practical AI applications. Here you will find a range of tool types, from large comprehensive code bases to small niche scripts. Note that tools are shared without endorsement, and with no guarantee of any sort. Do your own homework on any software prior to installation and use!

🛠️ Robot Writers Room

This repository demonstrates using AI to brainstorm and refine story ideas collaboratively with a human. Rather than replacing the human, the AI acts as a creative partner, suggesting ideas and doing research. At each step, the human can accept, reject, or modify the AI's suggestions. One of the main challenges in writing is coming up with ideas. This project aims to help writers overcome writer's block by providing a creative partner to bounce ideas off of.

🛠️ Gdańsk AI

Gdańsk AI is a full-stack AI voice chatbot (speech-to-text, LLM, text-to-speech) with integrations for Auth0, OpenAI, the Google Cloud API, and Stripe, comprising a web app, an API, and the AI pipeline.
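The speech-to-text → LLM → text-to-speech loop such a chatbot implements can be sketched minimally as follows. The three stage functions are stubs standing in for the real service integrations (speech recognition, a language-model API, and a speech synthesiser); this is an assumption-laden sketch of the architecture, not Gdańsk AI's actual code.

```python
from dataclasses import dataclass, field

# Stub stages: each would normally call an external service.
def transcribe(audio: bytes) -> str:
    return audio.decode("utf-8")          # pretend the audio is already text

def generate_reply(history: list[str], user_text: str) -> str:
    return f"You said: {user_text}"       # placeholder for an LLM call

def synthesize(text: str) -> bytes:
    return text.encode("utf-8")           # pretend speech synthesis

@dataclass
class VoiceChatbot:
    history: list[str] = field(default_factory=list)

    def handle_turn(self, audio_in: bytes) -> bytes:
        user_text = transcribe(audio_in)                 # 1. speech-to-text
        reply = generate_reply(self.history, user_text)  # 2. LLM response
        self.history += [user_text, reply]               # 3. keep context
        return synthesize(reply)                         # 4. text-to-speech
```

Keeping the conversation history inside the bot object is what lets the LLM stage answer follow-up questions in context.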

Research Spotlight

The "Research Spotlight" section highlights significant research in the realm of AI. The section includes breakthrough studies, exploring new theories, and discussing potential implications and future directions in the field of AI.

🔍 ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs

The paper introduces ToolLLM, a framework to enhance the tool-using abilities of open-source large language models. It constructs a dataset called ToolBench containing instructions involving 16,000 real-world APIs across 49 categories. ToolBench is automatically generated using ChatGPT with minimal human involvement. To improve reasoning, the authors propose a depth-first search decision tree method that allows models to evaluate multiple reasoning traces. They also develop an automatic evaluator ToolEval to efficiently assess tool-use capabilities. By fine-tuning LLaMA on ToolBench, they obtain ToolLLaMA which demonstrates strong performance on ToolEval, including generalizing to unseen APIs. Overall, ToolLLM provides a way to unlock sophisticated tool use in open-source LLMs.
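The depth-first search idea can be illustrated with a toy version: explore candidate continuations of a reasoning trace depth-first, backtrack when a branch fails, and return the first successful trace. The `expand` and `is_success` callbacks below are illustrative stand-ins for the paper's LLM-proposed tool calls and answer checks, not its actual implementation.

```python
def dfs_decision_tree(state, expand, is_success, max_depth=5):
    """Depth-first search over reasoning traces.

    `expand(state)` returns candidate next states (e.g. tool calls the
    model proposes); `is_success(state)` checks for a finished answer.
    Unlike a single greedy chain, a failed branch triggers backtracking
    instead of aborting the whole attempt.
    """
    if is_success(state):
        return state
    if max_depth == 0:
        return None
    for nxt in expand(state):
        found = dfs_decision_tree(nxt, expand, is_success, max_depth - 1)
        if found is not None:
            return found
    return None  # backtrack: no candidate under this branch succeeded
```

For example, with a toy task of reaching 10 from 1 by repeatedly applying +1 or *2, `dfs_decision_tree(1, lambda n: [n + 1, n * 2], lambda n: n == 10, max_depth=6)` finds a valid sequence, exploring and abandoning dead-end branches along the way.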

🔍 MetaGPT: Meta Programming for Multi-Agent Collaborative Framework

The paper introduces MetaGPT, a framework to improve large language model collaboration on complex tasks. It incorporates real-world standardized operating procedures into prompts to guide multi-agent coordination. Roles like ProductManager and Architect produce structured outputs matching industry conventions. A shared environment and memory enable knowledge sharing. On software tasks, MetaGPT generated more code, documents, and higher success rates than AutoGPT and AgentVerse, showing its ability to decompose problems across specialized agents. The standardized workflows and outputs aim to reduce incoherence in conversations. Overall, MetaGPT demonstrates a way to capture human expertise in agents to tackle intricate real-world problems.
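The role-based assembly line MetaGPT describes can be sketched as below: agents act in a fixed order and read from a shared memory, so each role builds on the previous role's structured output. The `act` method here is a placeholder for an LLM call guided by a role-specific operating procedure; the role names follow the paper, but everything else is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A role-specialised agent; `act` would normally prompt an LLM with
    that role's standardized operating procedure."""
    role: str

    def act(self, memory: list[dict]) -> dict:
        last = memory[-1]["content"] if memory else ""
        return {"role": self.role, "content": f"{self.role} output for: {last}"}

def run_pipeline(goal: str, roles: list[str]) -> list[dict]:
    """Shared memory lets each role consume the previous role's output,
    mimicking the paper's sequential multi-agent workflow."""
    memory = [{"role": "User", "content": goal}]
    for agent in (Agent(r) for r in roles):
        memory.append(agent.act(memory))
    return memory
```

Calling `run_pipeline("build a 2048 game", ["ProductManager", "Architect", "Engineer"])` yields a memory log in which each entry is derived from its predecessor, which is the coordination pattern the standardized outputs are meant to stabilise.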

More On This Topic

  • Stack Overflow Survey Data Science Highlights
  • ChatGPT, GPT-4, and More Generative AI News
  • Google Answer to ChatGPT by Adding Generative AI into Docs and Gmail
  • 8 Ways to Improve Your Search Application this Week
  • Free 4 Week Data Science Course on AI Quality Management
  • Visual ChatGPT: Microsoft Combine ChatGPT and VFMs

Rewiring business and the future with AI


Just like the discovery of fire sparked previously unfathomable progress for humans, AI – and now generative AI – offers fundamentally new ways for businesses and societies to tackle major challenges and unexplored potential. But this new technology also presents uncertainty and challenges, especially for those enterprise leaders trying to force generative AI into old business models without a strategy.

Amid these changes, analytics leaders are central to seizing this moment, but only if they act swiftly; otherwise, they risk being left on the sidelines.

From organic experimentation to scaling enterprise adoption

Businesses have been trying to bring AI into their organizations for some time, but with the arrival of gen AI, those aspirations have increased. A recent study by McKinsey Digital estimates that generative AI could bring in an additional $4 trillion in economic value annually[1]. In addition, developer productivity could double[2], and our own experience shows a 30–40% improvement in collaboration and decision-making.

That said, organic adoption is mostly among developers and lacks structure or measurement. Few companies have moved beyond this phase, which is why data and analytics leaders must take centre stage. Their data management skills are key, but so is their ability to bring all the right roles to the table: business leaders, IT, risk and compliance, legal, and people functions.

The building blocks to maximize AI-powered outcomes

The shift to becoming an AI-first enterprise isn’t easy. But it must be done quickly: 57% of Fortune 1000 companies report that their boards expect a double-digit increase in revenue from AI/ML investments in the coming fiscal year[3].

Based on our experience helping enterprises scale AI, we’ve identified several building blocks for success. They include embedding analytics in processes and workflows, establishing a cloud-based technical architecture, and creating a scalable operating model. But here are three steps to focus on in particular:

1. Prioritize the right use cases

With so many places to start, deciding where your investments will deliver the greatest value can be difficult. I recommend using generative AI where you don’t need high levels of accuracy or deterministic results, focusing on the opportunities with the greatest potential.

For instance, in healthcare, enhancing the patient experience is a core goal. Generative AI empowers healthcare practitioners to make better decisions by creating personalized patient health summaries that they can review based on encounter and claims data. As a result, healthcare professionals can speed up patient response times and improve patient outcomes.

As you weigh up opportunities, be mindful not to fall into the productivity trap. Instead, prioritize creating end-to-end value and reimagining outcomes – not just doing the same things faster.

2. Prepare employees

Becoming an AI-first enterprise requires significant change management and upskilling, especially as many employees are wondering, “What will happen to my job?”

AI will empower employees, open new career opportunities, and allow them to tap into the full value of their unique knowledge, augmenting their impact on the business. But only if you invest in upskilling them. And again, velocity is key.

We’re on this journey ourselves, deepening data literacy skills across the business. We’ve given 70,000 employees new data skills since 2021, and over the past two months alone, we’ve trained 20% of our colleagues in generative AI – we will reach 40,000 by year-end. Generative AI is at play here too, giving learners instant access to Genpact’s collective intelligence through our learning platform, Genome.

3. Make AI-driven decisions responsibly

Demonstrating that your AI practices are responsible and explainable has always been important, but even more so now that employees in any part of your business can access generative AI tools independently. With generative AI, we’ve entered a maze of ethics, copyright, and intellectual property complications.

Developing a strategy for these evolving concerns is daunting, and many companies feel ill-equipped to do so. That’s why we’ve created a responsible generative AI framework that you can use as a launching pad or a plug-and-play solution that maintains your reputation by:

  • Protecting your IP, data security, and models
  • Taking into account how responsible AI differs by region and industry with experts who understand the regulations
  • Enabling responsible decision-making


Moving beyond point solutions to AI-driven business models

Put together, these building blocks lead to a major shift for enterprises. As our board member Ajay Agrawal points out in his book, Power and Prediction, enterprise leaders have limited most AI projects to date to point solutions – quick fixes to existing problems without addressing the underlying systems. While these initiatives may increase productivity or provide incremental improvement, they fall short of AI’s full potential.

But with systems-level AI implementation, businesses can realize exponential gains. In the past, companies might have used AI to automate customer service chats. Now, systems-level implementation could turn data from customer service chats into insights that inform decision-making at speed, from product innovation to pricing to how to improve the customer experience.

Achieving this objective requires the right tech stack and technology partners and a scalable operating model that allows chief data and analytics officers to adapt their roles and lead this shift toward AI-first businesses. The enterprises that remain agile enough to shift quickly and realign their business models with an AI-first strategy stand to gain the most from the AI revolution.

What’s next?

As we look to the future, centuries-old stories still hold lessons for us in the age of gen AI.

Greek mythology introduces us to Prometheus, one of the Titans, who stole fire and gave it to humanity. It unlocked knowledge, technology, and advanced civilizations. As analytics leaders, you’re our Prometheus. You can take the fire started by AI and generative AI and turn it into an unimaginable impact that will advance not only your business but the future of our societies and the planet.

_____________________________________________________

The insights in this blog were part of Genpact’s keynote at MachineCon in New York. Watch the recording here.

[1.] Aamer Baig et al., “Technology’s generational moment with generative AI: A CIO and CTO guide,” McKinsey Digital, July 11, 2023.

[2.] Begum Karaci Deniz et al., “Unleashing developer productivity with generative AI,” McKinsey Digital, June 27, 2023.

[3.] ClearML, “Transforming Generative AI Investments into Business Value: Fortune 1000 Survey Reveals Top Challenges and Economic Impact,” ACCESSWIRE, July 19, 2023.


ElevenLabs Introduces Real-Time Streaming for Text-to-Speech, Offers Multilingual Experience Similar to Google’s Bard

In a significant stride, ElevenLabs has unveiled a new feature – input streaming for generating speech in real-time with remarkable sub-1-second latency. This cutting-edge capability, available via the ElevenLabs platform, enables users to listen to Large Language Model (LLM) responses as they’re being crafted.

We have just released input streaming, which allows you to stream LLM responses and generate speech in real-time – all possible with sub-1-second latency.
Try it today: https://t.co/JcUgx0BElg https://t.co/uvnUOU8t0q

— ElevenLabs (@elevenlabsio) August 7, 2023
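The core trick behind sub-second-latency input streaming can be sketched generically: buffer tokens as the LLM emits them and hand each completed sentence to the synthesiser immediately, rather than waiting for the full response. The function below is an illustrative sketch of that buffering step under those assumptions, not the ElevenLabs API.

```python
import re
from typing import Iterable, Iterator

def sentences_from_stream(tokens: Iterable[str]) -> Iterator[str]:
    """Buffer streamed LLM tokens and emit complete sentences, so speech
    synthesis can begin before the full response has been generated."""
    buf = ""
    for tok in tokens:
        buf += tok
        # Flush every complete sentence currently sitting in the buffer.
        while True:
            m = re.search(r"^(.*?[.!?])\s+", buf)
            if not m:
                break
            yield m.group(1)
            buf = buf[m.end():]
    if buf.strip():
        yield buf.strip()  # flush whatever remains at end of stream
```

Each yielded sentence would be passed straight to a text-to-speech call, so audio for the first sentence plays while later sentences are still being generated; this overlap is what keeps perceived latency low.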

Elevating the experience further, ElevenLabs introduces the eleven_multilingual_v1 model, presenting an array of voices that breathe life into content. This model supports a diverse range of languages, including English, German, Polish, Spanish, Italian, French, Portuguese, and Hindi.

With just a few lines of code, creators and developers can harness the richness of these voices, painting a captivating auditory landscape. In addition to this, you have the option to choose different voices and even clone your own voice.

Interestingly, the tool’s features bear a resemblance to Google’s Bard, a multilingual text-to-speech marvel. Bard’s expansion to 40 new languages, including Arabic, Chinese, German, and Spanish, has broadened its global reach.

Both ElevenLabs and Bard cater to a multilingual audience, offering spoken outputs across various languages. While Bard reflects Google’s efforts to train it on extensive content to ensure accuracy, ElevenLabs opens doors to real-time text streaming, providing a dynamic and immediate auditory experience.

Whether exploring pronunciation or simply relishing the auditory rendition, ElevenLabs and Bard create a symphony of linguistic possibilities for users worldwide.

Interestingly, ChatGPT from OpenAI lacks a built-in text-to-speech model, leaving a notable gap in its capabilities. It seems this is the one element yet to be included in OpenAI’s toolkit. Perhaps a cue could be taken from ElevenLabs, which has introduced innovative features in the same arena. While the Whisper API handles speech-to-text, OpenAI hasn’t rolled out a comparable text-to-speech API.
