"Nothing is certain," Benjamin Franklin once said, "except death and taxes." With unwavering certainty, I can say that artificial intelligence will be entrenched in our future work lives — it's only a matter of who embraces it and how. Oracle, for one, is embracing it.
The company is adding generative AI capabilities to its Oracle Fusion Cloud Human Capital Management (HCM) system, supported by Oracle Cloud Infrastructure. According to Oracle, the new generative AI-powered features will improve job posting, search, and hiring processes for HR professionals and job applicants alike.
"Generative AI is the future of workplace technology with untapped potential to transform HR processes," said Kim Kohlman, vice president of HCM Operations at Hearst, an Oracle customer. "We anticipate that these improvements with generative AI will allow our teams to focus their efforts on increasing productivity and driving meaningful business value."
Oracle adds that the generative AI capabilities will help HR employees generate customized text for job descriptions specific to the position and company, write requirements for job postings, summarize employee performance data from peers and managers for reviews, and generate suggestions tailored to the company culture.
"With the ability to summarize, author, and recommend content, generative AI helps to reduce friction as employees complete important HR functions," said Chris Leone, executive vice president of applications development at Oracle Cloud HCM.
"For example, with the new embedded generative AI capabilities in Oracle Cloud HCM, our customers will be able to take advantage of large language models to drastically reduce the time required to complete tasks, improve the employee experience, enhance the accuracy of workforce insights, and ultimately increase business value," Leone added.
Oracle is incorporating generative AI into HCM to bolster its existing AI capabilities. Whether the features could eventually replace any HR employees remains unclear; the company made no statements on that question.
Celestial AI raises $100M to transfer data using light-based interconnects
David Lazovsky and Preet Virk, technologists with backgrounds in semiconductor engineering and photonics, came to the joint realization several years ago that AI and machine learning workloads would quickly encounter a “data movement” problem. Increasingly, they predicted, it would become challenging to move data to and from compute hardware as AI models scaled past what could be kept on the die of any one memory chip.
Their solution — architected by Phil Winterbottom, previously a researcher at the distinguished Bell Labs — was an optical interconnect technology for compute-to-compute, compute-to-memory and on-chip data transmission. Along with Winterbottom, Lazovsky and Virk founded a startup, Celestial AI, to commercialize the tech. And now, that startup is attracting big backers.
Celestial AI today announced that it raised $100 million in a Series B round led by IAG Capital Partners, Koch Disruptive Technologies and Temasek’s Xora Innovation fund. The tranche, which brings Celestial AI’s total raised to more than $165 million, will be used to support production of Celestial’s photonics platform by expanding the company’s engineering, sales and technical marketing departments, according to CEO Lazovsky.
Celestial has around 100 employees at present — a number that Lazovsky expects will grow to 130 by the end of the year.
“Today, compute and memory are closely coupled. The only way to add more high bandwidth memory is to add more compute, whether the additional compute is required or not,” Lazovsky told TechCrunch via email. “Celestial’s tech enables memory disaggregation.”
In a data center, memory is often one of the most expensive resources — in part because it’s not always used efficiently. Because memory is tied to compute, it’s challenging — and sometimes impossible, due to bandwidth constraints and sky-high latency — for operators to “disaggregate” and pool the memory across hardware within the data center.
According to an internal Microsoft study, up to 25% of memory in Azure is “stranded,” or left over, after the servers’ cores have been rented to virtual machines. Reducing this stranded memory could cut data center costs by 4% to 5%, the company estimated — potentially significant savings in the context of a multibillion-dollar operation.
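To make "stranded" memory concrete, here is a toy model of the problem (the numbers and the Python sketch are purely illustrative, not drawn from the Microsoft study): once a server's cores are fully rented out, whatever RAM is left over cannot be sold, no matter how much remains.

```python
# Toy model of memory stranding (illustrative numbers only).
SERVER_CORES = 64
SERVER_RAM_GB = 512

# Hypothetical VM shapes rented by customers: (cores, ram_gb).
vms = [(16, 64), (16, 64), (16, 64), (16, 64)]

used_cores = sum(cores for cores, _ in vms)
used_ram = sum(ram for _, ram in vms)

assert used_cores == SERVER_CORES          # no cores left to rent...
stranded_gb = SERVER_RAM_GB - used_ram     # ...so the leftover RAM sits idle

print(f"Stranded RAM: {stranded_gb} GB "
      f"({stranded_gb / SERVER_RAM_GB:.0%} of the server's memory)")
# A disaggregated memory pool would let that RAM be attached to VMs
# running on other servers over a fast (e.g., optical) interconnect.
```

In this toy server, half the memory is stranded once the cores run out; Azure's 25% figure is the real-world analogue.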
Celestial — which began as a portfolio company of The Engine, the VC firm spun out of MIT in 2016 — developed an ostensible solution in its photonics-based architecture, which scales across multiple-chip systems. Using light to transfer data, Celestial’s tech can beam information both within chips and chip-to-chip, making both memory and compute available for AI — and other — workloads.
Celestial also claims that its tech can reduce the amount of electricity necessary for data movement, indirectly boosting a chip’s performance. Typically, chips devote a portion of the electricity they draw to data movement between their circuits, which takes away from the electricity that the chip can direct to computing tasks. Celestial’s photonics reduce the power required for data movement, allowing a chip to — at least in theory — increase its compute power.
Celestial’s photonics tech, which is compatible with most industry interconnect standards (e.g. CXL, PCIe), delivers 25x higher bandwidth and 10x lower latency and power consumption than optical alternatives, the company asserts.
“With the growth in AI, especially large language models (LLMs) and recommendation engine workloads, there is a shift towards accelerated compute,” Lazovsky said. “The key problem going forward is memory capacity, memory bandwidth and data movement — i.e. chip-to-chip interconnectivity — which is what we are addressing with Celestial’s photonic fabric.”
Celestial is offering its interconnect product through a licensing program, and says that it’s engaged with several “tier-one” customers, including hyperscalers and processor and memory companies.
The interconnect product appears to be priority number one for Celestial. The company also sells its own AI accelerator chip, dubbed Orion, built on its photonics architecture. But as investors told TechCrunch in a recent piece for TC+, AI photonics chips have yet to overcome the engineering challenges that would make them practical at scale. Unless Celestial has stumbled upon breakthroughs in data-to-analog conversion and signal regeneration — top stumbling blocks for today’s photonics chips — it’s unlikely that Orion is much further along than the competition.
Chip aside, Celestial has a number of competitors in a photonic integrated circuit market that could be worth $26.42 billion by 2027.
Ayar Labs, which makes chip solutions based on optical networking principles, has raised over $200 million in venture capital since its founding in 2015. Ranovus, another rival, recently landed a $73.9 million investment.
There could be consolidation ahead in the broader optical interconnection space, though. Around three years ago, Marvell bought Inphi, an optical networking specialist, for $10 billion. After a period of quiet, Microsoft last year acquired Lumenisity, a startup developing high-speed optical cables for data center and carrier networks.
Inphi and Lumenisity each targeted different use cases with their tech. But the enthusiasm from Big Tech around optics and photonics is worth noting.
“Our photonics technology is truly differentiated and is unique with superior characteristics,” Lazovsky said. “Given the growth in generative AI workloads due to LLMs and the pressures it puts on current data center architectures, demand is increasing rapidly for optical connectivity to support the transition from general computing data center infrastructure to accelerated computing.”
Samsung Catalyst, Smart Global Holdings, Porsche Automobil Holding SE, The Engine Fund, imec.xpand, M Ventures and Tyche Partners also participated in Celestial’s Series B.
“Anxiety or enthusiasm?” asked Emily Chang at the Bloomberg Technology Summit.
“You need to have both – the thoughtfulness, the understanding, the nuance, and the tension between the two exists everywhere,” said Sam Altman, sharing his experience after travelling through the Far East, speaking to users, developers and world leaders.
Altman’s observation pretty much sums up generative AI adoption in enterprises globally. It has been creating quite a stir, fuelling both enthusiasm and anxiety.
AIM got in touch with technology heads and CXOs of leading companies across industries, alongside tracking the trends, to understand their adoption strategies, and the answers were quite surprising. Unlike previous waves of AI adoption, which were mostly top-down, generative AI adoption seems to be pushed in both directions: top-down as well as bottom-up.
Bottom-Up Approach
This is the first time in the history of AI adoption that the bottom-up approach is gaining immense traction: employees and teams come together, identify opportunities with use cases and PoCs, and are later supported by top management.
This is mostly driven by enthusiasm.
Most startups and growing companies fall under this category. These organisations are mostly experimenting with generative AI to address needs that benefit the company and its customers; along the way, it may also improve efficiency and increase productivity. Examples include Swiggy, MakeMyTrip, Tech Mahindra and others.
Amitkumar Banka, head of growth marketing at Swiggy, told AIM that the company is using generative AI to create customised food images based on specific requirements on its platform, and this is helping it serve millions of customers.
“From a Swiggy perspective, it is a bottoms up approach. Most teams, including analytics, product, design, corporate strategy have come together to form a strong point of view in terms of how Swiggy should take generative AI to the next level. Each person and team are coming up with their own use cases to take advantage of generative AI.”
Narasimha Medeme, VP and head of data science at MakeMyTrip, said the company has already launched conversational bots for customers and is now working on other features such as speech-to-text and support for Indian languages in its systems.
Top-Down Approach
The top-down approach has always been a go-to strategy for most companies, as it is faster and easier to adopt. For instance, of late, a lot of IT companies – the likes of TCS, Infosys, Wipro, Cognizant, Accenture, and others – are partnering with Microsoft and Google to unleash their generative AI initiatives, alongside other technology enablers like Oracle, SAP, Salesforce and Zoho.
The adoption trickles down from the top. Qlik SVP Geoff Thomas believes that if a company wants to adopt a data culture and become a data-driven company, it would require strong sponsorship and support from the highest level of leadership. “It is often driven from the CEO, from top to bottom.”
But there is a flipside. While this might be exciting for leadership teams looking to improve their efficiency and productivity, it often fuels anxiety among employees and teams, particularly those accustomed to traditional methods and productivity tools.
This also requires additional push from the organisation to offer training and certification programmes. Recently, Infosys announced a comprehensive and free AI certification training programme.
Hybrid Approach
Here, companies follow both top-down and bottom-up approaches, alongside setting up a centre of excellence to fuel generative AI use cases. Examples include HCL Tech, Infosys, Zoho, and others.
Infosys recently unveiled Topaz, a set of solutions and platforms using generative AI technologies, with 12,000 AI use cases and 150+ pretrained AI models.
Wipro is also taking a hybrid route. The company has partnered with Google Cloud to leverage its generative AI tools, and will integrate them with its own AI models and pre-built industry solutions.
So, What’s the Best Approach?
In conversation with AIM, Ramprakash Ramamoorthy, director of AI research at Zoho Corp, vouches for the hybrid approach, believing that the right mix of narrow and large models will be a win-win for companies and their customers.
“Narrow AI, where one model is trained to do one task based on past experiences will thrive, helping companies automate redundant tasks. Whereas, LLM-based generative AI will augment these capabilities by offering seamless availability of information from across sources.”
Zoho has a suite of 13 applications that are integrated with ChatGPT. “Our focus right now will be to tightly integrate our AI stack across our product suites and also, in parallel, build in-house LLMs for businesses to provide seamless user experience in our offerings,” said Ramprakash.
Each industry has its own way of implementing generative AI in its functions. Deepika Kaushal, deputy vice president at Piramal Finance, told AIM that the company is still identifying use cases for generative AI applications, and that it is better to learn from the experts now and build in-house capabilities later.
On the other hand, Vivek Sahabadi, Head of Data Analytics at Navi, is of the opinion that the kind of data a company handles decides the right approach. “For fintech companies, where user data is critical, building their own models makes sense, whereas, in industries such as food tech, external models can be used.”
All in all, it is important for companies to consider factors such as cost, expertise, data security, and in-house infrastructure capabilities before diving into generative AI adoption. Most importantly, a second-order understanding of generative AI's usefulness should be established first. Companies must not rush in recklessly, regardless of whether excitement or anxiety is pushing them towards it.
Amazon Web Services has announced the launch of a $100 million AWS Generative AI Innovation Center. The new program aims to help customers successfully build and deploy generative artificial intelligence offerings. In addition, it will connect AWS AI and machine learning experts with customers around the globe to help them envision, design and launch new generative AI products, services and processes, the company said in a press release.
Amazon’s global customers “are hungry for guidance about how to get started quickly and securely with generative AI,” said Matt Garman, senior vice president of sales, marketing and global services at AWS, in a statement. “The Generative AI Innovation Center is part of our goal to help every organization leverage AI by providing flexible and cost-effective generative AI services for the enterprise, alongside our team of generative AI experts to take advantage of all this new technology has to offer.”
AWS is leveraging its global community of partners to work with business leaders in all industries to help them maximize the impact of generative AI in their organizations, Garman said.
What the AWS Generative AI center offers
The AWS Generative AI Innovation Center has assembled a team of strategists, data scientists, engineers and solutions architects who will work alongside customers throughout the process of building generative AI systems. These experts will provide “a wide range of services from advisory functions, such as exploring the best foundation models to meet business objectives, to hands-on engagements, such as fine-tuning foundation models, to meet industry-specific needs,” Sri Elaprolu, head of the Generative AI Innovation Center and senior manager of data science and machine learning, told TechRepublic.
“While we see machine learning happening everywhere, there are specific industries that we are focused on first,” Elaprolu noted, including financial services, healthcare and life sciences, automotive and manufacturing, media and entertainment, telecom and energy.
For example, healthcare and life sciences companies can pursue ways to accelerate drug research and discovery, he said. “It’ll help healthcare companies finally deliver on the promise of personalized medicine for both doctors and patients. They will be able to use generative AI to answer questions and provide diagnoses extracted from integrated health records and research databases.” The goal is to save lives by empowering researchers to create new protein sequences, antibodies and enzymes for vaccines, and gene therapies based on models trained to use libraries of science data, Elaprolu added.
The Generative AI Innovation Center will work with customers using Amazon’s “working backwards process,” Elaprolu said. “First, we work with the customers to identify the business opportunities and the potential generative AI use cases. Then, our team helps them plan and develop proof-of-concepts, and lastly, we help them prepare for production launch at scale.”
Specifically, the generative AI professionals will help customers with brainstorming and problem formulation, Elaprolu said. “We’ll support them in working through challenges involved and defining a clear path to success. Our goal here is to help our customers understand how to select the right generative AI use cases to experiment with, improve accuracy in foundation and large language models, and strategize how to fine-tune and customize these models for use cases.”
Customers will also be offered free workshops, engagements and training to help them identify the use cases that will create the greatest value for their businesses, based on best practices and industry expertise, according to the press release. Generative AI professionals from AWS and the AWS Partner Network will help select the right models, define paths to navigate technical or business challenges, develop proofs of concept and make plans for launching offerings at scale.
How customers plan to use the AWS Generative AI center
Sales-enablement platform Highspot, travel guidebook publisher Lonely Planet and customer engagement platform Twilio are among the first companies that will work with the innovation center to develop generative AI offerings, the company said.
To help customers reduce costs, the Generative AI Innovation Center will provide guidance on how to apply generative AI responsibly and optimize machine learning operations. The team will offer strategies, tools and assistance to help customers use AWS generative AI services, according to the AWS press release.
The idea is for customers to be able to train and run their models using high-performance infrastructure. Additionally, customers can build, train and deploy their own models with Amazon technologies, AWS said.
“The potential generative AI brings is huge, and at Highspot we’re leveraging it to transform sales enablement and continue leveling up the value we give our customers,” said Kurt Berglund, vice president of science at Highspot, in a statement. The innovation center is providing “creative guidance for some of the most complex challenges and opportunities involved in bringing generative AI workloads to life at scale,” Berglund added.
Chris Whyde, senior vice president of engineering and data science at Lonely Planet, said in a statement: “The AWS Generative AI Innovation Center, paired with expert-driven advice and Lonely Planet’s award-winning content, will enable us to provide more personalized travel recommendations, making travel more accessible for those around the world.”
Twilio’s goal with its Twilio CustomerAI “is to empower businesses to leverage both generative and predictive intelligence capabilities that help them better understand and provide deeper value to their customers,” said Kathryn Murphy, senior vice president of product management at Twilio, in a statement.
Tips for applying generative AI responsibly
Elaprolu offered tips for applying generative AI responsibly, which the center will promote: develop methods to detect biases and explain model predictions, and implement monitoring and human review of the generative AI's output.
In addition, AWS has been providing tools to help customers with responsible AI, such as Amazon SageMaker Clarify and AWS AI Service Cards, he said. The team is creating “an operational approach” that encompasses people, processes and technology to maximize benefit and minimize risk. “Additionally, continuous education on the latest developments in AI/ML is an important part of responsible use as the technology constantly changes,” Elaprolu noted.
Accenture is investing in generative AI
In related news, Accenture announced on June 21, 2023, that it will expand its partnership with AWS to help its customers reinvent their business processes with AI. The news was part of Accenture’s recently announced $3 billion investment in its data and AI practice.
Accenture will work with AWS to invest in developing new industry-specific and cross-industry offerings, pre-built models and training to help clients utilize large language models and generative AI and move from experimentation to adoption. The company said it will help clients adopt Amazon Bedrock and Amazon SageMaker, along with other AWS machine learning technologies.
The investments will span industries including financial services, life sciences, customer support and supply chain, Accenture said.
As we enter the age of big data and artificial intelligence, the tools and resources available to data scientists and researchers continue to grow exponentially. Every tool, whether it's a wizard, a code generator, or an AI-powered data extraction utility, promises to ease the burden of repetitive tasks, provide valuable insights, and improve the overall efficiency and productivity of our workflow. KDnuggets' latest cheat sheet, AI Chrome Extensions for Data Scientists Cheat Sheet, presents you with an impressive array of advanced tools and resources designed to support your data science game. They cover a wide range of applications, from understanding complex scientific literature to writing high-quality manuscripts and more.
The selection of tools presented on this cheat sheet includes SciSpace Copilot, an AI-powered research assistant designed to help you understand the text, math, and tables in scientific literature. Fireflies, an AI assistant powered by GPT-4, is also featured. This revolutionary tool can surf the web and summarize various types of content, including articles, YouTube videos, and emails, with human-like efficiency. Also highlighted is AIPRM, a resource that provides a list of curated prompt templates for several fields such as DevOps, Generative AI, Software engineering, and productivity. Other tools like Originality.AI, CodeSquire.AI, Codeium, and Data Scraper, each with unique capabilities and applications, are presented to enhance your research and coding skills.
In addition to these, we also have Grammarly GO, which generates high-quality drafts, outlines, and revisions by understanding your context, preferences, and goals. Sider is another versatile tool that allows you to manipulate any text as per your requirements. Finally, CatalyzeX and 10AI, powerful Chrome extensions for data scientists, are listed for their potential to facilitate research, summarizing papers, detecting AI, coding autopilot, and data scraping.
Stay tuned to KDnuggets for more cheat sheets and additional learning resources, keeping you at the cutting edge of the evolving data science landscape.
The latest benchmark tests of chip speed in training neural networks were released on Tuesday by MLCommons, an industry consortium. As in past years, Nvidia scored top marks across the board in the MLPerf tests.
With competitors Google, Graphcore and Advanced Micro Devices not submitting entries this time around, Nvidia's dominance across all eight tests was complete.
However, Intel's Habana business brought meaningful competition with its Gaudi2 chip, and the company pledges to beat Nvidia's top-of-the-line H100 GPU by this fall.
The benchmark test, Training version 3.0, reports how many minutes it takes to tune the neural "weights", or parameters, until the computer program achieves a required minimum accuracy on a given task, a process referred to as "training" a neural network.
Along with training on server computers, MLCommons released a companion benchmark test, MLPerf Tiny version 1.1, which measures performance on very-low-power devices.
The main Training 3.0 test, which totals eight separate tasks, records the time to tune a neural network by having its settings refined in multiple experiments. It is one half of neural network performance, the other half being so-called inference, where the finished neural network makes predictions as it receives new data. Inference is covered in separate releases from MLCommons.
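In code terms, the Training metric is roughly "wall-clock time until a quality bar is reached", not raw throughput. Here is a minimal PyTorch sketch of that idea; the function and argument names (eval_fn, target_acc, and so on) are illustrative and not part of the actual MLPerf harness:

```python
import time
import torch

def time_to_accuracy(model, loader, eval_fn, target_acc, lr=1e-3):
    """Train until eval_fn(model) >= target_acc; return minutes elapsed."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    start = time.time()
    while eval_fn(model) < target_acc:    # quality gate, checked each epoch
        for x, y in loader:               # one pass over the training data
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return (time.time() - start) / 60.0   # MLPerf reports minutes; lower is better
```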
Nvidia took the top spot in all eight tests, with the lowest time to train. Two new tasks were added. One tests the GPT-3 large language model (LLM) made by OpenAI. Generative AI using LLMs has become a craze due to the popularity of OpenAI's ChatGPT program, which is built upon the same LLM. In the GPT-3 task, Nvidia took the top spot with a system it assembled with the help of partner CoreWeave, which rents cloud-based instances of Nvidia GPUs.
The Nvidia-CoreWeave system took just under eleven minutes to train using the C4 data set, the Colossal Clean Crawled Corpus. That system made use of 896 Intel Xeon processors and 3,584 Nvidia H100 GPUs. The system carried out the tasks using Nvidia's NeMO framework for generative AI.
The training happens on a portion of the full GPT-3 training run, using the "large" version of GPT-3, with 175 billion parameters. MLCommons restricts the test to 0.4% of the full GPT-3 training in order to keep the runtime reasonable for submitters.
Also new this time around was an expanded version of the recommender engines that are popular for things such as product search and social media recommendations. MLCommons replaced the training data set that had been used, which was a one-terabyte data set, with a four-terabyte data set called the Criteo 4TB multi-hot. MLCommons decided to make the upgrade because the smaller data set was becoming obsolete.
"Production recommendation models are increasing in scale — in size, compute, and memory operations," noted the organization.
The only vendor of AI chips to compete against Nvidia was Intel's Habana, which submitted five entries with its Gaudi2 accelerator chip, plus one entry submitted by computer maker SuperMicro using Habana's chip. Those systems collectively submitted in four of the eight tasks. In every case, the Habana systems came in far below the top Nvidia systems. For example, in the test to train Google's BERT neural network on Wikipedia data to answer questions, Habana came in fifth place, taking two minutes to complete the training versus eight seconds for a 3,072-GPU Nvidia-CoreWeave machine.
However, Intel's Jordan Plawner, head of AI products, noted in an interview with ZDNET that for comparable systems, the difference in time between Habana and Nvidia is close enough it may be negligible to many companies.
For example, on the BERT Wikipedia test, an eight-accelerator Habana system, with two companion Intel Xeon processors, came in at just over 14 minutes to train. That result was better than two dozen other submissions, many with double the number of Nvidia A100 GPUs.
"We invite everyone to look at the 8-device machines," said Plawner. "We have a considerable price advantage with Gaudi2, where we are priced similar to a similarly spec'd A100, giving you more training per dollar."
Plawner noted that not only is Gaudi2 able to beat some similar configurations of Nvidia A100, but it runs with a slight handicap. Nvidia submitted its MLPerf entries using a training data format called "FP-8", for floating point, 8-bit, whereas Habana used an alternate approach called BF-16, for B-float, 16-bit. The higher arithmetic precision of BF-16 somewhat hampers training in terms of time to complete.
Later this year, said Plawner, Gaudi2 will be using FP-8, which he said will allow for greater performance. It will even allow Habana to beat Nvidia's newer H100 system on performance, he predicted.
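To see what the format choice means in practice, here is a minimal PyTorch sketch of BF16 mixed-precision training, the format Habana submitted with (assumes a CUDA GPU; FP8 training generally needs extra library support, such as Nvidia's Transformer Engine on H100, so it is not shown):

```python
import torch

model = torch.nn.Linear(4096, 4096).cuda()
opt = torch.optim.AdamW(model.parameters())
x = torch.randn(64, 4096, device="cuda")
target = torch.randn(64, 4096, device="cuda")

# Inside autocast, matrix multiplies run in bfloat16 (8 exponent bits,
# 7 mantissa bits), roughly halving memory traffic versus float32;
# FP8 halves it again on hardware that supports it, trading away
# further precision for speed.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    loss = torch.nn.functional.mse_loss(model(x), target)

loss.backward()   # parameters and gradients remain float32 outside autocast
opt.step()
```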
"The industry needs an alternative" to Nvidia, said Plawner. Customers, while traditionally reluctant to switch from the trusted brand, are now being pushed by a sudden tightness in the supply of Nvidia's parts. CEO Jensen Huang said last month that Nvidia is having a hard time filling demand for H100 GPUs.
"Now they're motivated," Plawner told ZDNET of customers frustrated by lack of Nvidia supply.
"This is what we are hearing from them, that they have things they want to do tomorrow, that the CEO is mandating, and they cannot do it because they cannot get GPUs, period."
"Trust me, they're making far more than they're spending [on generative AI]. If they can put 50 people on a Gaudi project to literally get the same time to train, if the answer is, I have no GPUs, and I'm waiting, or, I have Guadi2, and I can launch my new service tomorrow, they will go buy Gaudi2s to launch their new service."
Intel is the world's second-largest chip manufacturer, or "fab", after Taiwan Semiconductor, noted Plawner, which gives the company the ability to control its own supply.
Although Nvidia builds multi-thousand-GPU systems to take the top score, Habana is capable of the same, said Plawner. "Intel is building a multi-thousand Gaudi2 cluster internally," he said, with the implicit suggestion that such a machine could be an entry in a future MLPerf round.
Tuesday's results mark the second training round in a row in which not a single alternative chip maker posted a top score against Nvidia.
A year ago, Google split the top score with Nvidia thanks to its TPU chip. But Google didn't show up in November of last year, and was again absent this time. And startup Graphcore has also dropped out of the running, focusing on its business rather than showing off test results.
In a phone conversation, MLCommons director David Kanter, asked by ZDNET about the no-show by competitors, remarked, "The more parties that participate, the better."
Google did not reply to an inquiry from ZDNET at press time asking why the company did not participate this time around. Advanced Micro Devices, which competes with Nvidia on GPUs, also did not reply to a request for comment.
AMD did, however, have its CPU chips represented in systems that competed. In a surprising turn of events, every single winning Nvidia system used Intel Xeon CPUs as the host processor. In the year-earlier results, all eight winning entries, whether from Nvidia or Google, were systems using AMD's EPYC server processors. The switch shows that Intel has managed to recoup some lost ground in server processors with this year's release of Sapphire Rapids.
Despite the absence of Google and Graphcore, the test continues to attract new system makers. This time around, first-time submitters included CoreWeave, IEI and Quanta Cloud Technology.
Since its launch, ChatGPT's Achilles' heel has been its inability to access information past 2021. To remedy the issue, OpenAI announced in May that Microsoft Bing would be incorporated into ChatGPT to give the chatbot internet access.
Now that feature is available on mobile.
On Tuesday, OpenAI announced updates to its ChatGPT iOS app. The biggest update is that ChatGPT Plus users will be able to access Bing-powered browsing in the app.
To access this feature, Plus users have to go to the "new features" section in their app settings, select GPT-4 in the model switcher, and choose the "Browse with Bing" drop-down, according to the release.
On mobile, access to this feature is limited to subscribers because web browsing with Bing is a subscriber-exclusive feature even on desktop — for now.
When OpenAI initially announced the collaboration with Microsoft, the company reassured non-subscribers that "soon" everyone would have free access to the feature simply by enabling a plugin that brings Bing to ChatGPT.
The other update is improved search history on mobile. Now when users tap on a search result, they will be taken to the respective point in the conversation.
Although the past few months have seen a surge in AI developments, the global AI arms race has been underway for years. Since 2020, Tortoise Media has produced its annual Global AI Index, which ranks the nations competing for AI dominance — and the latest rankings are in.
Tortoise Media's fourth edition of The Global AI Index, released Wednesday, reflects last year's surge in generative AI developments spawned by the launch and success of ChatGPT.
To determine how nations rank in terms of AI development, the media company uses three pillars of analysis: investment, innovation, and implementation.
At the forefront, and leading by a significant stretch, is the United States, assigned a score of 100. The US led in all three pillars, especially in terms of investment due to high scores in the Commercial Investment sub-pillar, which refers to the level of startup activity.
This is fitting as some of the biggest leaders in the generative AI space right now are US companies such as Google, Microsoft, and most importantly, OpenAI.
In second place is China, which scored 62 out of 100. The US and China have retained first and second place since 2020. However, there have been shifts in the rankings of other countries.
The UK slipped from third place in 2020 and 2021 to fourth, displaced by Singapore, which has seen significant growth over the past few years. Singapore was ranked tenth in 2020, moved up to sixth in 2021, and in 2023 climbed three more places, scoring 50 out of 100.
The overall rankings reflect the countries' performance on the AI scale, but it is also worth noting the countries leading in AI intensity. Singapore, Israel, and Switzerland led in terms of intensity, meaning these countries "perform best when looking at AI capacity relative to their population and economy size", according to the study.
As the race continues to build steam and more countries ramp up resources to compete, we can expect a continued shift in the country rankings next year.
We'll use AI to build up this image element by element. It is, quite simply, a masterpiece of art and design, extraordinarily culturally relevant, somewhat cheeky, and philosophically deep, providing life-changing inspiration to all who view and contemplate its many nuances.
When it comes to generative AI that produces images, there seem to be two main approaches. Tools like Midjourney and Stable Diffusion create entire images based on AI prompts (although they can, with mixed results, sometimes incorporate an existing image into their scene). Adobe Photoshop is pioneering the second approach: Adding an image and fitting it into an existing scene.
As we've discussed previously, Adobe Photoshop has a beta out that adds a powerful feature it calls Generative Fill. I've been exploring this tool for a while, and have had way more fun with it than seems appropriate for a serious technology like AI. Let me demonstrate.
The tools we're going to use
To create our glorious demo image, I used exactly two Photoshop tools. The first is the Lasso tool. This draws a freehand shape on the screen that specifies a selection.
The second tool is the Generative Fill bar. In the new Photoshop beta, this bar shows up whenever you have a new selection. Clicking the button with no text prompt will invoke the AI to fill out what it thinks will look best to complete the selection.
Adding some text as a prompt will instruct the AI, challenging it to create an element of the image as described in the prompt.
How to use Photoshop's AI Generative Fill
Some thoughts
Adobe claims all images are derived from Adobe stock images, which is why the variety isn't as good as with Midjourney. That said, you know you're in safe licensing territory if you build something with Photoshop's Generative Fill, whereas you don't really know whether an image you created in Midjourney was built from some other image its creators found on the internet.
The AI is weirdly fussy about what it will and will not create. Expect to spend some serious time trying multiple generation runs and multiple prompts.
And, finally, as I said before, pay attention to exactly how you size and place your selection. That tells the AI a tremendous amount about your intentions for the images you're asking it to create.
And with that, I'm done. Have you used the Photoshop beta and tried out Generative Fill? Let us know in the comments below. Also, if you feel called to compliment my artistic genius, the comments are there for you to heap your praise on my creation. Finally, if you curate fine art in the Louvre or the Met and you want to contact me for permission to display this great work, you can contact me via ZDNET or my socials.
Seriously, though. I think you can probably see how Generative Fill might be a really big help to those using Photoshop who need to add elements. While this image was firmly for fun, I'll be back in future weeks looking at how to use this tool for a more professional result. Stay tuned.
Disclaimer: Using AI-generated images could lead to copyright violations, so people should be cautious if they're using the images for commercial purposes.