The Truth Behind OpenAI’s Silence On GPT-4

OpenAI is Not “Open” AI

In March, OpenAI launched GPT-4 to great fanfare, but a dark cloud loomed on the horizon. Scientists and AI enthusiasts alike panned the company for not releasing any specifics about the model, such as its parameter count or architecture. Now, a top AI researcher has speculated on the inner workings of GPT-4, suggesting why OpenAI chose to hide this information, and the answer is disappointing.

OpenAI CEO Sam Altman famously said of GPT-4 that “people are begging to be disappointed, and they will be”, speaking about the potential size of the model. Indeed, rumours abounded ahead of the model’s launch that it would have trillions of parameters and be the best thing the world had ever seen. The reality, however, is more mundane. In the process of making GPT-4 better than GPT-3.5, OpenAI might have bitten off more than it can chew.

8 GPTs in a trenchcoat

George Hotz, world-renowned hacker and software engineer, recently appeared on a podcast to speculate about the architectural nature of GPT-4. Hotz stated that the model might be a set of eight distinct models, each featuring 220 billion parameters. This speculative statement was later confirmed to be true by Soumith Chintala, the co-founder of PyTorch.

While this puts the parameter count of GPT-4 at 1.76 trillion, the notable part is that these models do not all work at the same time. Instead, they are deployed in a mixture-of-experts (MoE) architecture.

This architecture splits the model into different components, known as expert models. Each expert is fine-tuned for a specific purpose or field, and is able to provide better responses for that field. The expert models then work together, with the complete model drawing on their collective intelligence.
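
OpenAI has not disclosed the architecture, so any illustration is necessarily a guess. The minimal Python sketch below shows only the general mixture-of-experts idea: the “experts” are stand-in functions, and a random score replaces the learned gating network that a real MoE model would use.

import numpy as np

# Illustrative sketch of mixture-of-experts routing; GPT-4's real
# architecture is undisclosed, and these "experts" are stand-ins.
def expert_code(x):    return x * 1.1
def expert_math(x):    return x * 0.9
def expert_general(x): return x * 1.0

experts = [expert_code, expert_math, expert_general]

def route(x, k=2):
    # A trained gating network would score each expert for this input;
    # random scores are used here purely for illustration.
    scores = np.random.rand(len(experts))
    top_k = np.argsort(scores)[-k:]
    weights = scores[top_k] / scores[top_k].sum()
    return top_k, weights

def moe_forward(x):
    top_k, weights = route(x)
    # Only the selected experts run, so the compute per query stays far
    # below what a dense model of the same total size would need.
    return sum(w * experts[i](x) for i, w in zip(top_k, weights))

print(moe_forward(1.0))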

This approach has many benefits. One is more accurate responses, since each expert is fine-tuned on its own subject matter. The MoE architecture also lends itself to being easily updated, as the maintainers of the model can improve it in a modular fashion instead of retraining a monolithic model.

Hotz also speculated that the model may be relying on the process of iterative inference for better outputs. Through this process, the output, or inference result of the model, is refined through multiple iterations.

This method might also allow GPT-4 to draw on inputs from each of its expert models, which could reduce the model’s hallucinations. Hotz stated that this process might be run 16 times, which would vastly increase the operating cost of the model.
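
Again, this is speculation, but the general pattern Hotz describes is easy to sketch: generate an answer, feed it back in, and repeat. In the toy Python sketch below, generate() is a hypothetical stand-in for a model call, not an actual OpenAI API.

def generate(prompt, draft=""):
    # Stand-in for a model call; a real system would return a refined answer.
    return draft + "[refinement pass] "

def iterative_inference(prompt, n_passes=16):
    draft = ""
    for _ in range(n_passes):
        # Each pass sees the original prompt plus the previous draft,
        # and each pass costs another full forward run of the model.
        draft = generate(prompt, draft)
    return draft

print(iterative_inference("some query"))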

This approach has been likened to the old trope of three children in a trenchcoat masquerading as an adult. Many have described GPT-4 as eight GPT-3s in a trench coat, trying to pull the wool over the world’s eyes.

Cutting corners

While GPT-4 aced benchmarks that GPT-3 struggled with, the MoE architecture seems to have become a pain point for OpenAI. In a now-deleted interview, Altman admitted to the scaling issues OpenAI is facing, especially in terms of GPU shortages.

Running inference 16 times on a model with MoE architecture is sure to increase cloud costs on a similar scale. Scaled up to ChatGPT’s millions of users, it’s no surprise that even Azure’s supercomputer fell short. This seems to be one of the biggest problems OpenAI currently faces, with Sam Altman stating that a cheaper and faster GPT-4 is the company’s top priority right now.

This has also resulted in a reported degradation in the quality of ChatGPT’s output. All over the Internet, users have reported that the quality of even ChatGPT Plus’ responses has gone down. We found a release note for ChatGPT that seems to confirm this, stating, “We’ve updated performance of the ChatGPT model on our free plan in order to serve more users.” In the same note, OpenAI also informed users that Plus users would be defaulted to the “Turbo” variant of the model, which has been optimised for inference speed.

API users, on the other hand, seem to have avoided this problem altogether. Reddit users have noticed that other products built on the OpenAI API provide better answers to their queries than even ChatGPT Plus. This might be because API users are far fewer in number than ChatGPT users, leading OpenAI to cut costs on ChatGPT while leaving the API untouched.

In a mad rush to get GPT-4 out to the market, it seems that OpenAI has cut corners. While the purported MoE model is a good step forward for making the GPT series more performant, the scaling issues that it is facing show that the company might just have bitten off more than it can chew.


How Generative AI is Reshaping the Landscape of the Metaverse

Interest in the possibilities of the Metaverse peaked during the pandemic, as people looked for more meaningful ways to connect with each other. Since then, however, the hype around the metaverse has declined. As it turns out, the latest technology to capture public attention, generative AI, can play a significant role in powering interactions and development on the Metaverse. The creative applications of generative AI can meet the huge demand for 3D content that developers need to create for virtual worlds.

Faster pace and more imaginative

Prajod Vettiyattil, Principal Architect at Fractal, explains just how. “Generative AI can help because it can create metaverse content from text descriptions. 3D modelling is normally labour-intensive work. There is a combination of skills required – you need to have some amount of artistic knowledge about colours and textures and their combinations—it’s not just about creating the 3D model. Say, for a bank, a school, or a store, certain kinds of colour combinations and lighting that denote these themes must be envisioned and used. Then, some videos and images must be created and integrated into the 3D model”, he explains.

Generative AI makes it easier to create virtual spaces, and the customisations that generative AI tools offer can make the artistry much more appealing, expanding the options at hand. Currently, each space and experience in the metaverse is painstakingly created and curated by human designers and developers. It takes weeks, or sometimes even months, to generate each space.

“Generative AI can be a good tool while ideating and not just when creating the final environment. For example, many generative AI tools used for 2D visualisations, like Midjourney and DALL·E 2, can be prompted to emulate the style of any artist you choose, like Van Gogh”, Vineet, Senior Consultant at Fractal, stated.

“The power to create avatars in the metaverse is the most interesting and complex aspect of the metaverse. An avatar will be a bot that can move, talk, and interact like a human. An avatar can be a representation of a human existing in the real world or a virtual-only being with no representation in the real world. We can use Generative AI to simulate realistic thoughts, physical appearance, voice, and movements in the metaverse. Generative AI can help build AI bots with emotional and conversational characteristics that are personalised for your preferences, much like the AI Virtual Assistant in the film “Her””, Prajod said.

However, it’s the simplicity with which generative AI does it that makes a world of difference. Prajod explained, “The key point in the value of generative AI is that automating the creation of interactive content and experiences becomes very easy. For example, making animated films like Avatar takes years because of hundreds and hundreds of hours of animation work and many iterations that go into it. But with generative AI, you can do it in months”.

Enhanced immersion

Vineet believes that these applications can step up the overall quality of social interaction through their different modalities. “Just like voice communication was a huge transformation after letters and telegrams. And video was another big step in interactivity. You can use generative AI to analyse ML algorithms or to create a piece of music and make it sound like Mozart, who had many unfinished symphonies before he passed away. So, using that logic, you can even make music stylistically like maybe ‘Lord of the Rings’ or ‘Led Zeppelin’. Just like ChatGPT can be used to build narratives; in the future, generative AI can be used in other modes like audio,” he explained.

All this is to say that the metaverse can finally become the fully immersive experience that it promised to be. Prajod further shared, “Unlike the text interface of chat apps and 2D interface of video conference apps, metaverse allows kinesthetic interactions like turning your head and moving your hands. This is a more immersive social experience compared to text or video”.

Adds great value to financial and educational sectors

Notably, it isn’t just for entertainment; there’s value to be found in the Metaverse through generative AI for business and education as well. “This technology opens new opportunities not just in human interaction and entertainment but also in education. There is no need to put in any money in physically building classrooms and other facilities, and scaling in these sectors can become far easier”, Prajod said.

Businesses such as banking, which are already leveraging the metaverse, can do more to help themselves. “Now, there’s a lot more you can do in terms of physical virtual interactions other than just automated chat, especially in the banking sector. There are virtual banks which eliminate the cost of physically going there or even the hassle of contacting someone at a call centre and waiting on phone lines.”

He adds, “Instead, even banks can invest their resources into other places. Even if you have a million customers, you can have a customer service representative for each of them because they’re just virtual avatars. It’s not just audio, there will be visual aspects which is a much fuller experience because you’d be able to see their expressions when they’re talking. This is valuable when you’re selling services to customers compared to calling them over the phone”.

Much like human customer support assistants, these bots can access internal data and respond to customer queries. Furthermore, there will no longer be a never-ending wait on phone lines when such virtual customer support assistants are deployed. These AI bots can also learn from the customer’s responses and get better at explaining and conversing based on each customer’s history.

“If I am calling a customer representative normally, a lot of the times I am on call with someone else the second time I call, so, the context is lost. But, on the metaverse that’s not a problem because the context can be retained when you’re talking to a chatbot”, Prajod stated.

But this isn’t the end of the positives. “There’s a lot of scopes, especially to create content for visually impaired users because of auto translation. You can also converse with people across the world in real-time using the auto-translate speech feature”, he further explains.

Other applications

In terms of applications, a digital twin is another space with underlying tech similar to the metaverse where generative AI can help. “Digital Twin technology needs high-resolution rendering and real-time data transfer. We are very excited about digital twins, which are replicas of certain processes or structures which can be connected to IoT (Internet of Things) endpoints. If you are creating a digital twin of, say, New York or Mumbai, you can show the virtual spaces of the roads and signals, and you can solve real-world problems of how to control the flow of traffic and showcase alternative routes of getting to places faster”, Vineet stated.

There are other challenges around the metaverse which generative AI can resolve. “The bandwidth needed for a customer to be able to experience the metaverse is a lot. Power and computing are considerable limiting factors currently. In this regard, generative AI can be used to re-generate the content on customers’ edge devices making it possible to show high-resolution content. Currently, most metaverses have a cartoonish look and a low resolution”, Prajod shared.


The music of the Riemann Hypothesis: Sound Generation in Python

Not long ago, I published an article entitled “The Sound that Data Makes”. The goal was turning data — random noise in this case — into music. The hope was that by “listening” to your data, you could gain a different kind of insight, not conveyed by visualizations or tabular summaries.

This article is a deeper dive on the subject. First, I illustrate how to turn the Riemann zeta function, at the core of the Riemann Hypothesis, into music. It constitutes an introduction to scientific programming, the MPmath library, and complex number algebra in Python. Of course, this works with any other math function. Then I explain how to use the method on real data, create both a data soundtrack and a data video, and combine the two.

Benefits of Data Sonification

Data visualizations offer colors and shapes, allowing you to summarize multiple dimensions in one picture. Data animations (videos) go one step further, adding a time dimension. You can find many on my YouTube channel. See example here. Then, sound adds multiple dimensions: amplitude, volume and frequency over time. Producing pleasant sound, with each musical note representing a multivariate data point, is equivalent to data binning or bucketization.

Stereo and the use of multiple musical instruments (synthesized) add more dimensions. Once you have a large database of data music, you can use it for generative AI: sound generation to mimic existing datasets. Of course, musical AI art is another application, all the way to creating synthetic movies.

Implementation

Data sonification is one of the projects for participants in my GenAI certification program offered here. In the remainder of this article, I describe the various steps:

  • Creating the musical scale (the notes)
  • Creating and transforming the data
  • Plotting the sound waves
  • Producing the sound track

I also included the output sound file in the last section, for you to listen and share with your colleagues.

Musical Notes

The first step, after the imports, consists of creating a musical scale: in short, the notes. You need it if you want to create a pleasant melody. Without it, the sound will feel like noise.

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from scipy.io import wavfile
import mpmath

#-- Create the list of musical notes

scale = []
for k in range(35, 65):
    note = 440*2**((k-49)/12)
    if k%12 != 0 and k%12 != 2 and k%12 != 5 and k%12 != 7 and k%12 != 10:
        scale.append(note)  # add musical note (skip half tones)
n_notes = len(scale)  # number of musical notes

Data production and transformation

The second step generates the data, and transforms it via rescaling, so that it can easily be turned into music. Here I use sampled values of the Dirichlet eta function (a sister of the Riemann zeta function), as input data. But you could use any real data instead. I transform the multivariate data into 3 features indexed by time: frequency (the pitch), volume also called amplitude, and the duration for each of the 300 musical notes corresponding to the data. Real and Imag are respectively the real and imaginary part of a complex number.

#-- Generate the data

n = 300
sigma = 0.5
min_t = 400000
max_t = 400020

def create_data(f, nobs, min_t, max_t, sigma):
    z_real = []
    z_imag = []
    z_modulus = []
    incr_t = (max_t - min_t) / nobs
    for t in np.arange(min_t, max_t, incr_t):
        if f == 'Zeta':
            z = mpmath.zeta(complex(sigma, t))
        elif f == 'Eta':
            z = mpmath.altzeta(complex(sigma, t))
        z_real.append(float(z.real))
        z_imag.append(float(z.imag))
        modulus = np.sqrt(z.real*z.real + z.imag*z.imag)
        z_modulus.append(float(modulus))
    return(z_real, z_imag, z_modulus)

(z_real, z_imag, z_modulus) = create_data('Eta', n, min_t, max_t, sigma)

size = len(z_real)  # should be identical to nobs
x = np.arange(size)

# frequency of each note
y = z_real
min = np.min(y)
max = np.max(y)
yf = 0.999*n_notes*(y-min)/(max-min)

# duration of each note
z = z_imag
min = np.min(z)
max = np.max(z)
zf = 0.1 + 0.4*(z-min)/(max-min)

# volume of each note
v = z_modulus
min = np.min(v)
max = np.max(v)
vf = 500 + 2000*(1 - (v-min)/(max-min))

Plotting the sound waves

The next step plots the 3 values attached to each musical note, as 3 time series.

#-- plot data

mpl.rcParams['axes.linewidth'] = 0.3
fig, ax = plt.subplots()
ax.tick_params(axis='x', labelsize=7)
ax.tick_params(axis='y', labelsize=7)
plt.rcParams['axes.linewidth'] = 0.1
plt.plot(x, y, color='red', linewidth = 0.3)
plt.plot(x, z, color='blue', linewidth = 0.3)
plt.plot(x, v, color='green', linewidth = 0.3)
plt.legend(['frequency','duration','volume'], fontsize="7",
    loc ="upper center", ncol=3)
plt.show()

Producing the sound track

Each wave corresponds to a musical note. You turn the concatenated waves into a wav file using the wavfile.write function from the Scipy library. Other than that, there is no special sound library involved here! Hard to make it easier.

#-- Turn the data into music

def get_sine_wave(frequency, duration, sample_rate=44100, amplitude=4096):
    t = np.linspace(0, duration, int(sample_rate*duration))
    wave = amplitude*np.sin(2*np.pi*frequency*t)
    return wave

wave = []
for t in x:  # loop over dataset observations, create one note per observation
    note = int(yf[t])
    duration = zf[t]
    frequency = scale[note]
    volume = vf[t]  ## 2048
    new_wave = get_sine_wave(frequency, duration = zf[t], amplitude = vf[t])
    wave = np.concatenate((wave, new_wave))
wavfile.write('sound.wav', rate=44100, data=wave.astype(np.int16))

Results

The technical document detailing the project in question can be found here. It includes additional details on how to add an audio file such as below to a data video, as well as test datasets (from real life) to turn into sound. Perhaps the easiest way to add audio to a video is to turn the wav file into mp4 format, then use the Moviepy library to combine both. To listen to the generated music, click on the arrow in the box below.

The music of the Riemann Hypothesis
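
As an example of that last step, assuming you already have a data video (data_video.mp4 is a placeholder name) alongside the sound.wav file produced above, one possible way to combine the two with the Moviepy library is sketched below.

from moviepy.editor import VideoFileClip, AudioFileClip

# Attach the generated soundtrack to an existing data video.
# File names are placeholders; adjust them to your own outputs.
video = VideoFileClip("data_video.mp4")
audio = AudioFileClip("sound.wav")

video_with_sound = video.set_audio(audio)
video_with_sound.write_videofile("data_video_with_sound.mp4")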

The figure below shows the frequency, duration and volume attached to each of the 300 musical notes in the wav file, prior to re-scaling. The volume is maximum each time the Riemann zeta function hits a zero on the critical line. This is one of the connections to the Riemann Hypothesis.


About the Author


Vincent Granville is a pioneering data scientist and machine learning expert, founder of MLTechniques.com and co-founder of Data Science Central (acquired by TechTarget in 2020), former VC-funded executive, author and patent owner. Vincent’s past corporate experience includes Visa, Wells Fargo, eBay, NBC, Microsoft, CNET, InfoSpace. Vincent is also a former post-doc at Cambridge University, and the National Institute of Statistical Sciences (NISS).

Vincent published in Journal of Number Theory, Journal of the Royal Statistical Society (Series B), and IEEE Transactions on Pattern Analysis and Machine Intelligence. He is also the author of “Intuitive Machine Learning and Explainable AI”, available here. He lives in Washington state, and enjoys doing research on stochastic processes, dynamical systems, experimental math and probabilistic number theory.

Data Scientist Survey: Do Tech Leaders Believe the AI Hype?

According to Domino Data Lab’s survey from the REV 4 conference, 90% of data scientists think generative AI hype is justified. Respondents are professionals who are leading, developing and operating generative AI initiatives across Fortune 500 companies.

“The findings validate the incredible business potential of Generative AI and its expected near-term impact,” wrote Kjell Carlsson, Domino Data Lab’s head of data science strategy & evangelism, in a blog post. “However, it also confirms key challenges — governance, control, privacy and fairness — as well as the severe limitations of the current, commercially available Generative AI offerings.”

The San Francisco-based Domino Data Lab collected responses from 162 data science executives, data science team leaders, data science practitioners and IT platform owners. Some additional opinions in the report were sourced from Domino Data Lab customers.

55% of data science professionals think AI will have a significant impact on business

More than half (55%) of the data science professionals and IT platform owners surveyed said generative AI will have a significant impact on their business within the next one to two years. Additionally, almost half of the respondents (45%) believe the hype is only rising, expecting generative AI to have an even greater impact than today’s expectations suggest.

Data from G2, EY and others points to a similarly large impact from AI. In a recent survey of tech executives, CNBC found that AI is their top priority for tech spending over the year starting in June 2023; the second priority is cloud computing.

According to Statista, artificial intelligence startups (a category in which Statista includes machine learning, robotics, neural networks and language processing) received a total yearly investment of $5 billion from 2020 to 2022.

Most data science leaders prefer to modify third-party AI

Most (55%) of the data science professionals and IT platform owners Domino Data Lab surveyed prefer to use foundation models from large third parties like OpenAI, Microsoft or Google but create different experiences for their customers on top of the base model. Another 39% want to build their own proprietary generative AI from scratch. Just 6% want to use AI features solely planned and provided by independent software vendors and other third parties.

The respondents believe the biggest problems with commercially available generative AI, such as ChatGPT, are security (54%), reliability (44%) and IP protection (42%).

These concerns mean that organizations need to invest in tools to make it easier to fine-tune generative AI models, as 41% of those surveyed plan to do. Some (35%) also plan to implement governance capabilities for tracking and managing the development of those AI models.

Governance and bias are the top barriers to AI adoption

There are still challenges facing generative AI adoption today. The data science professionals and IT platform owners surveyed said they foresee challenges around governance (57%), mitigating bias and ensuring fairness (51%) and control (49%), as well as finding employees with the skills for developing generative AI solutions (49%).

Data leakage is another problem cited by survey participants. Some are concerned about generative AI having low accuracy or leading to bad business decisions (35%) and budget overreach (33%).

Senior leadership in particular cited concerns about governing generative AI solutions generally (76%), as well as the reliability (76%) and security (71%) of solutions on the market today.

Is generative AI approaching the peak of its hype?

Other industry experts are warning the tech world to temper the hype.

“AI has great potential, but it is a huge high-risk bet, and a large percentage of your investment will likely go nowhere,” said Saurajit Kanungo, president of the consulting firm CG Infinity, in an email. “Only invest if you can measure the ROI in business terms – is it going to decrease costs or increase revenue?”

He points toward Gartner’s 2022 AI Hype Cycle graph, in which generative AI approaches the point labeled Peak of Inflated Expectations.

“I absolutely believe that AI (including generative AI) has the potential to drive value for every organization, big or small. However … I would advise executives to adopt AI as an evolution, not a revolution,” Kanungo said.

He finds the case for generative AI to be stronger than the case for the last hot technology investment trend: cryptocurrencies. “Cryptocurrencies require a whole new ecosystem or market to be made. Business cases to justify investing in generative AI in an organization are an easier challenge compared to making a whole market with cryptocurrencies,” Kanungo said.

How Generative AI is a Game Changer for Cloud Security

Cloud security and artificial intelligence have had a long-term partnership. For nearly a decade, AI has been used to identify threats and prioritize risks in the cloud through its pattern recognition capabilities and anomaly detection.

A lot has changed over the past 10 years, however. With more people and organizations migrating to cloud applications, threat actors have followed along, seeing cloud applications as a prime target.

Cloud security is more important than ever to an organization’s cybersecurity maturity, and AI’s integration into cloud security tools is a vital layer of defense against an expanding cloud-based threat landscape. Now, one of the biggest game changers for cloud security is generative AI, according to Google.

“Generative AI has the potential to reduce the toil of repetitive tasks that plague security teams, like aggregating and enriching data from a multitude of sources to gain a more complete understanding of risks and where to focus,” Sunil Potti, VP/GM of Google Cloud Security, said in a recent blog post as part of the Google Cloud Security Summit in June.

Google’s own cloud security efforts include AI Workbench, where AI will be used to address and prevent emerging threats, eliminate the toil of threat fatigue caused by alert overload and close the talent gap.

Building on AI’s role in cloud security

Traditionally, AI has been used to detect and remediate hundreds of threats in a matter of seconds.

Generative AI takes AI to a new level because it focuses on creating new data rather than just analyzing existing data. “[Generative AI] enables the development of realistic synthetic data, which can be used for training and testing security models without exposing sensitive information,” Bob Janssen, vice president of engineering and head of innovation at Delinea, told TechRepublic.

Generative AI is a game changer in how organizations address cloud security, Janssen said. “It provides realistic synthetic data for testing, simulates sophisticated attack scenarios and minimizes the risk of exposing sensitive information during development, enhancing overall security measures,” he added.

How generative AI impacts cloud security

What makes generative AI stand apart from the AI models used currently in cloud security is its ability to summarize, classify and generate information. With proper training, it can reason about specialized data and provide natural-language, conversational interactions that facilitate workflows more quickly than flat interfaces in typical security tools.

“These characteristics applied to cloud security enable customers to identify and prioritize the most relevant risks to their unique environment or regulatory requirements; to quickly generate the queries and detections required to consistently monitor for threats,” Potti said. Generative AI can be used to interact in natural language with an assistive experience that can guide customers to their ideal outcomes.

At Google, for example, cloud security is being “supercharged” with generative AI so customers can search petabytes of event data using natural language instead of writing custom queries. Another feature provides a human-readable explanation of potential attack paths and steps to remediate.

“So with AI, it’s still early days,” Potti said, “but we’re leveraging these superpowers to achieve security outcomes like early breach detection or instant classification of potential malware.”

Tour de France adds ChatGPT and digital twin tech. Here’s how and why

The Tour de France is one of the most prestigious bicycle races in the world, attracting millions of viewers every year. This year, the viewing experience is getting a massive upgrade by leveraging the latest tech, including IoT, edge computing, and generative AI.

NTT, the IT and services company, has been the Tour de France's partner for the last nine years. This year it is raising the bar by creating "the world's largest connected stadium" and incorporating ChatGPT.

The "connected stadium" sets up a digital twin of the race that will use real-time data to digitally replicate all aspects of the race, allowing the Tour de France to be an entirely digitized event.

The digital twin will be used to help give the event's organizer, the Amaury Sports Organization, a better understanding of what is going on at the event, helping to ensure smooth operations.

NTT will gather data on the bikes using geolocation and tiny sensors mounted underneath the saddle of each bike.

Using this technology, NTT will receive a constant stream of latitude, longitude, speed, and other data from those sensors. This data will be transmitted over radio networks to race vehicles before a microwave signal carries it to the end of the race. There, an edge-computing device will run a "containerized version" of a real-time analytics platform, according to the release.
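
NTT has not published the format of this data feed, so the following is purely a hypothetical Python sketch of what a receiving analytics layer might do with such records: parse each message and keep the latest reading per rider.

import json

# Hypothetical telemetry messages; the real sensor feed and its format
# are not public, so these records are purely illustrative.
raw_messages = [
    '{"rider": 21, "lat": 43.6045, "lon": 1.4442, "speed_kmh": 52.3}',
    '{"rider": 21, "lat": 43.6051, "lon": 1.4450, "speed_kmh": 54.1}',
    '{"rider": 8, "lat": 43.6040, "lon": 1.4439, "speed_kmh": 49.7}',
]

latest = {}
for msg in raw_messages:
    record = json.loads(msg)
    # Keep only the most recent reading per rider, as a real-time
    # analytics layer might before computing gaps and group speeds.
    latest[record["rider"]] = record

for rider, r in latest.items():
    print(f'rider {rider}: {r["lat"]:.4f}, {r["lon"]:.4f} at {r["speed_kmh"]} km/h')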

In addition, ChatGPT will be integrated into NTT's AI-driven Digital Human solution to provide detailed, relevant race information to fans.

The integration, called Marianne, will use "machine learning, speech recognition, natural language processing, and conversational AI" to provide fans with comprehensive race information, according to the release.

All of these advancements will also be applied to the Tour de France Femmes avec Zwift, the women's counterpart race that was inaugurated last year.

Microsoft brings new AI-powered shopping tools to Bing and Edge

Microsoft today announced a slew of new AI-powered shopping tools for its new Bing search engine and the Bing AI chatbot in the Edge sidebar. While a lot of the shopping features that Microsoft built into Edge over the years aren’t exactly fan favorites, this new set of tools actually looks useful.

Microsoft will now, for example, use Bing’s GPT-powered AI capabilities to automatically generate buying guides when you use a query like “college supplies.” It will automatically aggregate products in each category it comes up with, list their specs so you can compare similar items and, of course, tell you where to buy them (with Microsoft getting an affiliate fee when you buy).

Given that there is an entire ecosystem of sites that focus on these kinds of buying guides, it will be interesting to see how they will react to this change (and if Microsoft is doing this in Bing, Google and others will surely follow suit). Nobody is going to bemoan the end of the low-quality, SEO-optimized shopping content you often find when you try to compare different products, but this has the potential to hurt legitimate editorial operations, too.

The new buying guides in Bing are now available in the U.S., and the worldwide rollout of buying guides in Edge starts today.

Another new feature Microsoft is launching worldwide today is AI-generated review summaries. As the name implies, this feature sums up online reviews of products. To use this, you simply ask Bing Chat in Edge to summarize what people are saying about a given product and it will generate a quick overview for you.

Also new is Price Match, a tool that will help you request a price match from a retailer, even after the price drops. “We’ve partnered with top U.S. retailers with existing price match policies and will be adding more over time,” Microsoft says (though it didn’t specify which retailers it is working with).

Microsoft unveils first professional certificate for generative AI skills

Microsoft has added a new type of training and certification to its lineup that taps into the latest interest and excitement around AI. Part of the company's Skills for Jobs program, the new professional certificate on Generative AI will be given to anyone who takes the free classes on AI and passes the required exam.

Available through LinkedIn Learning, the Career Essentials in Generative AI program offers a free course on generative AI, a technology so named because it can generate different kinds of content. This form of AI has created a huge buzz due to such companies as OpenAI, Microsoft, and Google launching their own AI chatbots that people can use to ask questions, get information, and create content.

With its newfound popularity, AI has been seeping into more products, services, and organizations. This shift means that more workers will need to understand how to use AI, a realization that prompted Microsoft to devise the new certificate.

In an article published on LinkedIn, Kate Behncken, Corporate VP for Microsoft Philanthropies, called the initiative the first professional certificate on generative AI. Through the five classes, people will start by learning the basic concepts of AI and then advance into AI frameworks. Passing the assessment then entitles someone to the Career Essentials certificate.

The course includes the following individual sessions:

  1. What Is Generative AI? — Learn about the basics of generative AI, including its history, popular models, how it works, ethical implications, and much more.
  2. Generative AI: The Evolution of Thoughtful Online Search — Explore the distinctions between search engines and reasoning engines, with a focus on learning thoughtful search strategies in the world of generative AI.
  3. Streamlining Your Work with Microsoft Bing Chat — Learn how to leverage Microsoft Bing Chat to streamline and automate your work.
  4. Ethics in the Age of Generative AI — Learn why ethical considerations are a critical part of the generative AI creation and deployment process and explore ways to address these ethical challenges.
  5. Introduction to Artificial Intelligence — Get a simplified overview of the top tools in artificial intelligence.

Currently offered in English, the certificate will be available in Spanish, Portuguese, French, German, Simplified Chinese, and Japanese in the coming months. Following Microsoft's six other Career Essentials Professional Certificates in the Skills for Jobs program, the AI classes will be unlocked and free through 2025.

Beyond the certificate-driven training in AI, Behncken said that Microsoft will kick off a toolkit for teachers and trainers who provide training to different people and communities. The toolkit will contain downloadable content for trainers on the practical uses of AI as well as an AI course designed for educators.

Plus, Microsoft is launching a couple of challenges aimed at fostering learning in AI.

Starting July 17, its Learn AI Skills challenge is designed to teach people AI skills using Microsoft products. The company will also team up with GitHub and data.org on a Generative AI Skills Grant challenge, an open grant program geared toward nonprofit organizations, social enterprises, and educational or research institutions focused on implementing AI for historically marginalized populations around the world.

Based on a survey for Microsoft's recent Work Trend Index, 62% of the respondents said that they spend too much time searching for information in a typical workday. And though almost half said they're worried about AI potentially replacing their jobs, 70% said that they would offload as much work as possible to AI to ease their workloads.

Speakeasy is using AI to automate API creation and distribution

Just about every developer wants to create APIs to help other companies connect to their services more easily, but creating and documenting an API is a time-consuming process. Speakeasy, an early-stage startup, wants to make that an easier and more automated set of activities.

Today, the company emerged from stealth with a $7.6 million seed investment.

Speakeasy co-founder and CEO Sagar Batchu describes his startup as an API infrastructure company, and that means it’s building tools to make it easier to create and distribute APIs, something that is near and dear to him as a developer himself. “We’ve started by working on an important problem to me, one that I’ve faced a lot myself as a developer, which is really dramatically simplifying how developers are able to ship APIs to end users,” Batchu told TechCrunch.

He sees APIs giving developers a kind of superpower. “As developers, APIs allow us to take advantage of another company’s capabilities. And so making it really easy to ship those APIs to developers means that we can really help companies accelerate how their products are adopted, as well as we reduce the burden on developers when they integrate with those APIs,” he said.

While the goal is to build a platform of features eventually to help with that mission, the company is starting with two tools, one to help developers create the APIs, and one to help their users implement them more easily.

The first piece is called Managed SDKs. Developers provide an OpenAPI spec, and Speakeasy uses AI to help build a complete SDK along with the necessary documentation, reducing the time it takes to complete a task like this to minutes.

“Speakeasy uses AI to validate and enhance the spec, creates SDKs in the most popular languages, and publishes automatically to package managers. It takes minutes to set up, and SDKs are updated every time the spec changes – saving developers significant time,” the company explained.

The second piece is designed to help the developer end user implement that API without having to worry about the underlying infrastructure, creating a package using HashiCorp’s popular Terraform tool.

“With Speakeasy, API producers can, for the first time, easily create and maintain Terraform providers from an OpenAPI specification – dramatically reducing engineering burden, while unlocking an entirely new developer community,” the company said.

The startup currently has nine employees, but is hiring engineers to help build out the platform further. As he builds the company, Batchu believes being remote will help him find a more diverse workforce. “So first of all, I think about how adopting a remote-friendly hiring philosophy means that we’ll be able to access more places, more diverse communities. It’s definitely something really important to us. And as we move forward, we will be looking to hire great talent from everywhere,” he said.

Today’s round was led by GV with participation from Quiet Capital, Flex Capital, StoryHouse Ventures and Firestreak Ventures. Last year the company raised an additional $3.3 million in a pre-seed round led by Quiet Capital with participation from a host of prominent industry angel investors.

The Managed SDK piece is generally available starting today. The Terraform piece is available in beta.

Smart waste management solutions that are revolutionizing the industry

‘Smartification’ is making its way into many aspects of sustainability, including smart waste management. With smart waste management technologies becoming more widely available, waste collection is developing into a greener and far more efficient part of the economy. Prominent implementations include robots, sensor technology, and more. Local authorities are also increasingly leveraging smart solutions to keep planet Earth from turning into a trash planet.

Introduction

By 2050, global waste is projected to reach 4 billion tons, a significant increase over 2016 levels. This rapid upsurge ties back to growing urban populations and the rise of consumer culture over the past few years, neither of which is slowing down any time soon. To reduce the strain this places on the environment and on waste collection services, communities around the globe are moving towards smart waste management solutions and technologies.

What is smart waste management?

Smart waste management refers to any system that uses technology to make trash collection more efficient, cost-effective, and environmentally friendly. Most of these systems rely on the Internet of Things (IoT): connected sensors that gather and track real-time data, helping to optimize waste collection and spur future innovation.

Innovative technologies revolutionizing waste management

The following technologies combine IoT data analytics with conventional solutions, helping operators identify challenges and improve as they go.

Smart waste bins

When left to their own devices, people don’t always bother to sort their waste into labeled separation or recycling bins. To help reduce improper sorting, the Polish company Bin-e built a smart waste bin that uses AI-based object recognition to automatically sort recyclables into separate compartments. After categorizing the waste, the machine compresses it and monitors how full each compartment is.

Moreover, smart waste bins take manual mistakes out of the initial sorting process, making material processing faster and simpler for recycling facilities. This can reduce waste management costs by as much as 80% and drastically improve employee efficiency.

Waste level sensors

Weekly collection services have been around for years, but they aren’t always the most efficient option.

To cut down on unnecessary trips to and from landfills, organizations and communities are installing waste-level sensors in bins and dumpsters of any size. These devices collect and store data on fill levels, letting collection services predict how often each bin needs to be emptied. They also help prevent public containers from overflowing and littering the surrounding area.
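
As a minimal, hypothetical sketch of the idea (the bin IDs, readings, and threshold below are invented rather than taken from any vendor's system), fill-level data can be reduced to a simple pickup decision in a few lines of Python.

# Hypothetical fill-level readings (percent full) keyed by bin ID; in a
# real deployment these would stream in from IoT sensors.
fill_levels = {"bin-001": 35, "bin-002": 82, "bin-003": 91, "bin-004": 12}

PICKUP_THRESHOLD = 75  # assumed threshold; operators would tune this

bins_to_empty = [bin_id for bin_id, level in fill_levels.items()
                 if level >= PICKUP_THRESHOLD]

print("Schedule pickup for:", bins_to_empty)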

Garbage truck weighing mechanisms

Weighing mechanisms installed in garbage trucks, like waste-level sensors, can help predict fill levels and cut down on collection trips. They do this by recording the weight of waste containers and using that data to anticipate fill levels over time. Cities can use this technology to forecast more precisely how often trucks need to be sent out and to reduce annual collection costs.

Pneumatic waste pipes

As urban populations grow, so does the need for smart waste management solutions that can handle ever-larger amounts of trash. Some cities are tackling the problem by installing pneumatic waste disposal bins connected to a series of underground pipes. Trash travels through the pipes to a waste collection plant, where it can be sorted or hauled away. This system removes the need for traditional waste collection rounds, cuts energy costs, and boosts overall efficiency.

Solar-powered trash compactors

In an effort to improve collection efficiency and reduce trips to and from the dump, the manufacturer Ecube Labs has built a solar-powered trash compactor that can hold up to five times more waste than a conventional bin. These machines compress trash as it accumulates to maximize bin capacity, and they collect and transmit data on fill levels and collection times to help streamline the collection process.

Recycling apps

Sorting contaminated waste is one of the toughest challenges for recycling centers. To keep unrecyclable materials out of these centers, companies are developing apps such as RecycleNation and iRecycle that make recycling easier for individuals. These apps give users data on recycling rates and center locations, and their extensive lists of materials help users determine which items can be recycled.

Final Thoughts

The idea behind building smart waste management services fit for both the present and the future is to bring a data-driven approach to the way we manage garbage.

With sensors, intelligent routing, digital platforms, and container monitoring, we have all the tools we need to improve our waste management system.