xAI: Elon Musk’s New AI Venture Unveils Its Mission with Website Launch

Elon Musk's latest venture in the realm of artificial intelligence, xAI, has taken a significant step forward with the launch of its new website. The site provides insights into the company's mission and team, shedding light on Musk's latest attempt to challenge the dominance of major U.S. tech firms in the AI sphere, such as OpenAI.

A New Player in the AI Industry

The website's launch is a clear signal of xAI's ambition to become a major player in the AI industry. The company, which has been operating under the radar until now, is finally ready to showcase its capabilities to the world. The website launch is expected to provide more visibility to the firm's work and attract potential clients and partners.

xAI's mission, as stated on the website, is to “understand the true nature of the universe.” The team is led by Musk and includes members who have worked with other big names in AI, including OpenAI, Google Research, Microsoft Research, and DeepMind. The advisory team includes Dan Hendrycks, a researcher who currently leads the Center for AI Safety, a nonprofit that aims to “reduce societal-scale risks associated with AI.”

The website also announces a Twitter Spaces discussion on July 14th, where listeners can “meet the team and ask us questions.” This move indicates xAI's commitment to community engagement.

Collaboration and Future Prospects

According to the website, xAI operates separately from Musk's overarching X Corp but will work closely with X (Twitter), Tesla, and other companies. This collaboration suggests a synergistic approach to AI development, leveraging the strengths and resources of these different entities.

The launch of xAI's website comes at a time when the AI industry is witnessing rapid growth and transformation. AI technologies are being increasingly adopted across various sectors, from healthcare to finance, and are playing a crucial role in driving innovation and efficiency. In this context, xAI's entry into the market is expected to further fuel the growth and development of the AI industry.

The launch of xAI's website marks a significant step in the firm's journey. It signals the company's readiness to take on established players and underscores its ambition to become a major force in AI. As the industry continues to grow and evolve, the entry of firms like xAI is expected to further fuel innovation and competition in the market.

DataRobot CEO Sees Success at Junction of Gen AI and ‘Classical AI’

July 12, 2023, by Alex Woodie

What’s the next generation of enterprise AI going to look like? If you ask DataRobot CEO Debanjan Saha, enterprises will see the most business benefits by combining new generative AI tools and techniques with the classical AI and machine learning approaches that customers have honed over the past decade.

Saha, who joined DataRobot as president and COO about 18 months ago, brings a long track record of building enterprise data products for some of the biggest companies on the planet, including Google Cloud, where he was VP and GM of the data analytics group, and Amazon Web Services, where he oversaw Aurora and RDS.

Saha brings a no-nonsense engineer’s perspective to the chief executive’s office, which he has occupied since the beginning of July 2022. During the past six months, he’s embarked upon a whirlwind tour that had him visit 100 customers in 20 cities around the world. That tour has been quite informative, especially when it comes to generative AI and large language model (LLM) technologies such as ChatGPT.

“They’re all excited about it,” Saha told Datanami in a recent interview. “They’re anxious about it because they know that their board is asking about it. Their CEO is asking about it. I’m talking to the board members and the CEOs and they’re trying to figure out ‘Okay, this is great, but I mean, how many chatbots are we going to make?’”

There’s no denying the impact that ChatGPT has had on the field of AI. After all, we’re living through AI’s iPhone moment, Saha said. After years of struggling to find a way to successfully work machine learning and other forms of AI into the enterprise, ChatGPT has put AI on people’s map in a big way.

“A lot of people thought of AI as one of these novel, esoteric technologies. They didn’t quite understand what AI can do,” he said. “Now everybody does, right? And that has kind of changed the momentum.”

Unfortunately, when it comes to actually delivering business value, there’s no real “there” there with the latest round of Gen AI and LLM technology, at least not yet. “I think we’re a little ahead of our skis,” Saha said.

ChatGPT is the “iPhone moment” for AI. (SomYuZu/Shutterstock)

While Saha is grateful that advances in AI are finally getting the wider recognition they deserve, there’s still quite a bit of work to do to fully integrate it into the enterprise.

“In my view, I think the proof is going to be in the pudding,” he said. “All the euphoria is going to last for [only] so long. Ultimately, the business needs to show value from AI, and generative AI is no exception. Otherwise, we are going to be [in] the same situation we have been with AI right now.”

The problem is that the track record for traditional machine learning and what he termed “classical AI” is not great. There are numerous studies showing that only a small number of enterprises (usually the larger ones) have been able to reap the rewards from AI and ML. Most have been stuck in the mud, with questionable data and haphazard processes around their ML and AI workflows.

“AI has been around for a very long time and people have been using AI in various different ways and various different places for a long time,” Saha said. “To tell you the truth, in my view, it hasn’t really lived up to the expectation with respect to creating business impact that people thought AI can create.”

While Gen AI and LLMs have basically broken the hype meters over the past eight months, they won’t solve the AI struggles enterprises have gone through over the past 10 years. That doesn’t mean they don’t have value. But according to Saha, generative AI apps built on LLMs will comprise perhaps 10 to 20% of the overall AI solution.

Debanjan Saha just completed his first year as CEO of DataRobot.

“What I’ve seen is people taking generic LLMs and making them more subject matter experts in specific areas,” he said. “Everybody will have Langchain and they’ll figure out how to use that data to either fine tune the model or in many cases use a nice prompting strategy to make them more knowledgeable about a specific area.”

But that’s not where the real action is going to be, he said. “That’s a component. [But] I think ultimately it’s going to be combination of generative AI and predictive AI and finding the right use case, doing the right problem framing, and then figuring out where the ROI is going to be from this,” he said.

The bulk of the action in successful enterprise AI strategies, Saha said, will involve a lot of hard work. It will involve mapping AI tech to the specific business problem that the enterprise faces. It will require building a robust data pipeline to feed models. And it will require creating resilient workflows to handle the training, deployment, and monitoring of the AI models. And lastly it will require integration with the rest of the business processes and applications. In short, all the same stuff that has tripped up classical AI adopters for the past decade.

While the AI tech has advanced, there won’t be any shortcuts to doing the work of integrating it into the enterprise, Saha said. The hyperscalers will provide some solutions, but they’ll lock you into their cloud and they’ll also require technical skills to integrate the pre-built components into your specific environment.

Enterprises will be able to buy off-the-shelf AI apps from vendors, but they will be of limited value since they will only focus on a specific area. “It is covering only one use case, and if you want to cover everything that you do in the enterprise, [you’ll need] maybe couple of hundred of those in order to build the entire folio, which is not an easy thing to do either,” Saha said.

Naturally, Saha sees a large opportunity for DataRobot and other vendors in the AI space who can help enterprises connect the dots and build end-to-end AI solutions.

“Our strategy has been–and this is what DataRobot has done successfully with classical AI–is, how do you make it easy for people to get value out of generative AI? And not just generative AI, but generative AI and predictive AI together?”

While the DataRobot platform was originally built for predictive AI, the company is actively morphing it to handle new generative AI use cases. It won’t require major tweaks, Saha said, because many of the AI processes that DataRobot has already automated for predictive AI, from data prep to model monitoring, can be used for generative AI workloads, too.

Many of the LLMs that enterprises want to use are open source and available from sources like Huggingface and GitHub, Saha said. And if a DataRobot customer wants to tap into GPT-4 or other LLMs from Google, they have the option of using APIs within the DataRobot platform, he said.

To help customers understand how the various LLMs are running on their data, DataRobot will deliver a leaderboard. That product is currently under development, and could be announced next month, Saha said.

Saha sees the combination of predictive AI and generative AI paying dividends for his customers. In many cases, generative AI functions as the “last mile,” connecting the customer with the insight generated from the predictive AI.

For example, one of DataRobot’s customers uses predictive AI model to determine whether a specific customer is likely to churn. When the model spots a customer that fits the profile, it triggers a generative AI workflow that sends a customized email to the customer or surfaces a script to an agent to address the concern.

Another DataRobot customer uses the two types of AI in a hospital setting. The predictive AI model does the hard work of combining various data points to determine the likelihood of a patient being readmitted. Then the generative AI model takes that output and generates an English language explanation of the readmission calculation, which is included with the patient discharge paperwork.

“Those are the kinds of things that could be really, really interesting,” Saha said. “There are tons and tons of use cases of that type.”

DataRobot has about 1,000 customers, and it will be working with them to implement generative AI into their workflows. Smaller firms like DataRobot have a big advantage over cloud giants like Google and AWS as far as actually working with customers on their particular problems, as opposed to selling them a set of do-it-yourself “Lego blocks,” Saha said.

But the shift from purely predictive AI to a combination of predictive and generative AI will also help DataRobot target new customers who want repeatable AI processes instead of ad hoc AI mayhem. It will also allow DataRobot to target a new class of users, Saha said.

“I do think that’s going to increase the aperture in terms of the business outcome,” he said. “It’s not just people who deal with data and data science–it’s a much broader section of the user base who now will be able to [use] generative AI and AI in general.”

This article first appeared on sister site Datanami.


About the author: Alex Woodie

Alex Woodie has written about IT as a technology journalist for more than a decade. He brings extensive experience from the IBM midrange marketplace, including topics such as servers, ERP applications, programming, databases, security, high availability, storage, business intelligence, cloud, and mobile enablement. He resides in the San Diego area.

Tech Cos Must Tread Slowly on AI Path to Dodge Economic Apocalypse

Four months after Bengaluru-based finance start-up Dukaan laid off 90% of its support team in favour of an AI chatbot, the company’s CEO, Suumit Shah, took to Twitter to tout the profitable impact of the decision.

We had to layoff 90% of our support team because of this AI chatbot.
Tough? Yes. Necessary? Absolutely.
The results?
Time to first response went from 1m 44s to INSTANT!
Resolution time went from 2h 13m to 3m 12s
Customer support costs reduced by ~85%
Here's how's we did it 🧵

— Suumit Shah (@suumitshah) July 10, 2023

Referring to the transformation in the company’s performance, he highlighted that the time required for the initial response had dropped from 1 minute and 44 seconds to an instant. Further, the resolution time had been slashed from an average of 2 hours and 13 minutes to 3 minutes and 12 seconds. “Given the state of the economy, startups are prioritising ‘profitability’ over striving to become ‘unicorns’, and so are we,” Shah said.

Dukaan is not the only one to take a huge step amid the ongoing AI revolution due to the dominance of generative AI products and services in the market. Veteran companies like IBM have also taken decisions to let go of human resources gradually and let AI be integrated into the workforce.

Not in favour

The decision to automate several departments may not turn out in these companies’ favour. A recent Harvard Business Review study, ‘Companies That Replace People with AI Will Get Left Behind’, argues as much by drawing parallels between previous tech revolutions in history and the ongoing AI wave.

Historically, the study states, industries have never suffered macro-level unemployment from new technologies, so AI is unlikely to take over people’s jobs in the long term. However, because companies are adopting generative AI remarkably fast, substantial job displacement will be visible in the short term.

Michael I. Jordan, a professor at UC Berkeley and mentor to AI visionary Andrew Ng, told AIM that if tech companies are going to put people out of jobs, “that’s maybe okay”. But they should not do it so fast that it breaks the economic system; it should be done in stages, so that people have 10 or 20 years to realise what’s happening and make plans around it.

Along similar lines, the HBR study noted that AI is different [from the spread of electricity and computing] because companies are integrating it into their operations so quickly that job losses are likely to mount before the gains arrive. White-collar workers might be especially vulnerable in the short term.

Probable Solutions

With neck-and-neck competition pushing every company to become AI-first, probable solutions have been proposed and are being executed by stakeholders. Several companies, like Intuit, have begun reskilling programs to retain their workforces as the technology landscape evolves.

In June 2020, the financial software company laid off over 715 employees while announcing it would add over 700 roles to become an AI-powered company. Since then, it has developed programs to upskill employees for technical roles, including its signature six-month AI course and apprenticeship program.

The AI situation also needs the government to step in either to slow down the commercial adoption of AI (highly unlikely), or to offer special welfare programs to support and retrain the newly unemployed, suggested the HBR study. While conversations are taking place, no concrete approach has been finalised with the help of government intervention.

Globally, several steps of the big tech companies behind these technologies have been criticised heavily amid the continued bloodbath of mass layoffs. Yet no leader has been held accountable for the job losses or unethical restructuring of these organisations.

The recent layoff at Dukaan has caused outrage, and Shah is being called out for being insensitive towards the laid-off employees. By contrast, IBM’s decision to pause hiring for roles that could be replaced in the coming years looks like a better choice. The bottom line: it is high time companies and governments harnessed the power of AI collectively, rather than jumping on the bandwagon only to get left behind.

The post Tech Cos Must Tread Slowly on AI Path to Dodge Economic Apocalypse appeared first on Analytics India Magazine.

Anthropic’s updated ChatGPT-rival offers more detailed, less offensive responses


Less than six months after debuting its ChatGPT rival, Claude, Anthropic is rolling out an updated version. This time, the company promises the chatbot will provide more detailed answers, fewer harmful responses, improved reasoning behind answers, and simply overall better performance than before.


The company says Claude 2 is easier to converse with and better explains its thinking – acting more like a real human.

Available in the United States and the UK today, Claude 2 packs a lot more power than its predecessor.

In the world of AI, user input is broken down into "tokens", the chunks of text a model actually processes. The updated Claude can process 100,000 tokens, a significant upgrade from the previous version's 9,000.

That means Claude 2 can understand and process over 70,000 words at once, leading to better and longer responses. The AI can process hundreds of pages of documents, Anthropic says, and output much longer documents, including short stories.
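As a back-of-envelope check on those figures: the ratio implied by Anthropic's own numbers (100,000 tokens to roughly 70,000 words) is about 0.7 words per token. The sketch below assumes that ratio, which in practice varies with the tokenizer and the text:

```python
# Rough conversion from a context window in tokens to English words.
# The 0.7 words-per-token ratio is an assumption implied by Anthropic's
# figures (100,000 tokens ~ 70,000 words); real ratios vary by text.
WORDS_PER_TOKEN = 0.7

def approx_words(context_tokens: int) -> int:
    """Approximate word capacity of a context window."""
    return round(context_tokens * WORDS_PER_TOKEN)

print(approx_words(9_000))    # previous Claude: 6300 words
print(approx_words(100_000))  # Claude 2: 70000 words
```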


Further showing how intelligent its new AI is, Anthropic noted the latest model of Claude scored a 76.5% on the multiple-choice portion of the bar exam, an improvement from the previous version's score of 73%. New Claude also brings greatly improved coding skills, Anthropic claims. Its original version scored a 56% on the Codex HumanEval (a Python coding test) while the new version jumped to a 71%.

Claude 2 is also significantly safer. Anthropic admits that "no model is immune from jailbreaks," but says it's harder to prompt its new model to issue an offensive or dangerous response. In fact, the company said, according to its manual analysis, Claude 2 was twice as good at giving safe responses.

The new AI can be accessed via API or through a dedicated site, claude.ai.
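For the API route, a request might be assembled as below. The endpoint, headers, and Human/Assistant prompt framing follow Anthropic's text-completions documentation from around Claude 2's launch; treat the model name and version string as era-specific assumptions to verify against current docs before use:

```python
# Assembling a Claude 2 request for Anthropic's text-completions
# endpoint as documented around the model's launch. Model name,
# API-version header, and prompt framing are era-specific assumptions.
import json

API_URL = "https://api.anthropic.com/v1/complete"

def build_claude_request(user_message: str, api_key: str):
    headers = {
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    payload = {
        "model": "claude-2",
        # The completions API expected this Human/Assistant framing.
        "prompt": f"\n\nHuman: {user_message}\n\nAssistant:",
        "max_tokens_to_sample": 256,
    }
    return headers, payload

headers, payload = build_claude_request("Summarise this contract.", "sk-placeholder")
print(json.dumps(payload, indent=2))
# Sending it is a single HTTP POST, e.g. with the requests library:
# requests.post(API_URL, headers=headers, json=payload)
```

Using the dedicated site requires no code at all; the sketch above covers only programmatic access.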


Elon Musk wants to build AI to ‘understand the true nature of the universe’

By Kyle Wiggers

Elon Musk, Twitter’s not-so-benevolent CEO, today announced the launch of a new organization, xAI, to “understand the true nature of the universe.”

Ambitious? A little. So how does xAI plan to achieve it? More details will be announced during a Twitter Spaces on Friday, apparently. But xAI’s splash page reveals that the team — 12 people strong, currently — will be led by Musk and veterans of DeepMind, OpenAI, Google Research, Microsoft Research, Tesla and the University of Toronto.

xAI’s namesake is X Corp, the name assigned to Twitter since early April, along with the “X” label Musk has applied to his vision of an “everything app.”

xAI will be advised by Dan Hendrycks, the director at the Center for AI Safety, an AI research nonprofit. And it’ll work with Twitter and one of Musk’s other companies, Tesla, to “make progress towards [its] mission” — whatever that mission ends up being.

While Musk’s saving the juicy tidbits for Friday, xAI’s roster gives some clues as to what the organization’s work might entail. Many of the founding members specialize in large language models along the lines of OpenAI’s GPT-4 or have experience in techniques like reinforcement learning, which teaches AI models to accomplish tasks by rewarding them for performing those tasks.
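Reward-driven learning of the kind described can be shown with a toy agent that learns which of two actions pays off more. The two-action setup, payoff probabilities, and exploration rate below are purely illustrative, not any lab's actual method:

```python
# Toy illustration of learning from rewards: an epsilon-greedy agent
# discovers which of two actions pays off more often. Payoffs and
# epsilon are made up; this is the reward loop in miniature.
import random

random.seed(0)
true_payoff = [0.2, 0.8]   # hidden probability of reward per action
value = [0.0, 0.0]         # the agent's running reward estimates
counts = [0, 0]

for _ in range(2000):
    # Explore 10% of the time, otherwise exploit the best-known action.
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: value[a])
    reward = 1.0 if random.random() < true_payoff[action] else 0.0
    counts[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    value[action] += (reward - value[action]) / counts[action]

best = max(range(2), key=lambda a: value[a])
print(best, [round(v, 2) for v in value])  # the learned best action and estimates
```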

Announcing formation of @xAI to understand reality

— Elon Musk (@elonmusk) July 12, 2023

In an interview with Tucker Carlson in April, Musk said that he wanted to build what he referred to as “TruthGPT,” a “maximum-truth-seeking AI.” Likely, xAI’s efforts will start there, perhaps taking the form of a text-generating AI that Musk perceives as more “truthful” than existing ones; Musk has previously suggested that Twitter would use its own data to train ChatGPT-like tech.

Rumors about Musk starting up an AI company have been floating around for some time, with a report earlier this year from Business Insider revealing that Musk had purchased thousands of GPUs to power an upcoming generative AI product. The Financial Times similarly reported that Musk planned to create an AI firm to compete with the Microsoft-backed OpenAI, even reportedly seeking funding from SpaceX and Tesla investors to get the company started.

Musk’s AI ambitions have grown since the billionaire’s split with OpenAI co-founders Sam Altman and Ilya Sutskever several years ago. Launched as a nonprofit in 2015, OpenAI took on $1 billion in donations from Musk and others.

As OpenAI’s focus shifted from open source research to primarily commercial projects, Musk grew disillusioned — and competitive — with the company on whose board he sat. He poached key employees from OpenAI to work on Autopilot, Tesla’s driver assistance tech. And he became openly critical of OpenAI, referring to it at one point as a “profit-maximizing demon from hell.”

Musk resigned from the OpenAI board in 2018. And more recently, he cut off the company’s access to Twitter data, arguing that OpenAI, which had a $2-million-a-year licensing deal with Twitter established before Musk came aboard as CEO, wasn’t paying enough.

TCS Engaged in 50+ Generative AI Proof of Concepts and Pilots: CEO

TCS is engaged in over 50 generative AI proofs of concept (PoCs) and pilots and has over 100 opportunities in the pipeline, TCS CEO K. Krithivasan said during the FY 2023-24 financial results announcement.

“Our experience in building several generative [AI] powered use cases has shown that the full potential of generative AI is better realised through an enterprise-wide initiative rather than isolated use cases, requiring a deep partnership between business, legal, risk and compliance, as well as data and technology teams,” he said.

Earlier this month, TCS announced plans to significantly scale its Azure OpenAI expertise and to get 25,000 associates trained and certified on Azure OpenAI to help clients accelerate their adoption of this powerful new technology.

The IT giant also plans to launch its new Generative AI Enterprise Adoption offering on Microsoft Cloud to help customers jumpstart their generative AI journey.

This initiative aims to assist clients in improving customer experiences, introducing new business models, increasing revenue, and boosting productivity.

By leveraging TCS’ extensive knowledge and resources, this offering helps organisations harness the power of Generative AI for their overall growth and success.

In May, TCS also announced that it is partnering with Google Cloud to leverage its generative AI services, alongside designing and deploying custom business solutions for their customers.

The company, in its press release, said that this new offering is powered by Google Cloud’s Generative AI tools – Vertex AI, Generative AI Application Builder and Model Garden, and TCS’ own solutions. TCS will work with its customers to custom build their solutions based on the context provided by them.

The post TCS Engaged in 50+ Generative AI Proof of Concepts and Pilots: CEO appeared first on Analytics India Magazine.

¡Hola Alexa! Amazon expands AI-powered English lessons for Spanish speakers



This past January, Amazon launched an Alexa-based English learning program for Spanish speakers in Spain, and now it's expanding that program into the US and Mexico. The program employs artificial intelligence (AI) technology to provide learners with feedback on mispronunciations, while intelligently discerning accents.

The goal is to help Spanish speakers learn English organically, from the comfort of their own homes — and from a device that many likely already own, like an Echo speaker. Instead of workbook lessons on English vocabulary and grammar, Alexa takes a conversational approach to the language.


Alexa, Amazon's popular voice assistant, employs a combination of natural language processing, machine learning, and speech recognition to interact with users. Taking it a step further, the system uses a phonetic recurrent-neural-network-transducer model to predict phonemes, the smallest units of speech.

This technology stack enables Alexa to distinguish between similar-sounding phonemes from different languages and capture frequent mispronunciation patterns from training data. On the user's side, Spanish speakers will see structured lessons and a unique pronunciation feature with real-time feedback.

"When a language learner practices an English word or phrase, Alexa listens to the pronunciation to check for any mistakes," Animish Sivaramakrishnan, senior product manager, Alexa, explained to ZDNET. "If Alexa detects mispronunciations, Alexa brings this to the learner's attention for them to re-try. When pointing out and correcting mispronounced phrases, Alexa will slow down its pace to make it easier for users to listen to the phrase and correct the pronunciation."


This system disambiguates accent-driven variations in similar-sounding phonemes from genuinely mispronounced words, learning to point out only the actual mistakes while respecting the user's accent.

"On Echo Show devices, correctly pronounced words or phrases are highlighted in blue, while mispronunciations are highlighted in red," Sivaramakrishnan added.
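The feedback step described here can be sketched as a comparison of expected versus recognized phonemes. Note that Alexa's real system predicts phonemes from audio with a phonetic RNN-T, so the pre-aligned phoneme strings below are a deliberate simplification for illustration:

```python
# Sketch of the feedback step: compare the phonemes a learner was
# expected to produce with what the recognizer heard, flagging the
# mismatches (the ones an Echo Show would highlight in red). Real
# Alexa works on audio; these aligned strings are a simplification.
def score_pronunciation(expected, heard):
    """Mark each expected phoneme as correct (True) or mispronounced (False)."""
    results = []
    for i, phoneme in enumerate(expected):
        ok = i < len(heard) and heard[i] == phoneme
        results.append((phoneme, ok))
    return results

# "think" said with /t/ instead of /th/, a common Spanish-to-English slip.
report = score_pronunciation(["TH", "IH", "NG", "K"], ["T", "IH", "NG", "K"])
print([p for p, ok in report if not ok])  # phonemes to re-try: ['TH']
```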


Currently, this feature is available only to Spanish speakers who want to learn English; it will be available on all generations of Echo and Echo Show devices and via the Alexa mobile app.


The Real Reason Behind ChatGPT User Decline   

For the first time since its launch in November, the AI chatbot ChatGPT experienced a decline in website visits, suggesting a potential decrease in consumer interest towards AI chatbots and image-generators. According to SimilarWeb, global desktop and mobile traffic to the ChatGPT website witnessed a decline of 9.7% in June compared to May, while unique visitors to the website dropped by 5.7%. Additionally, the data reveals an 8.5% decrease in the amount of time visitors spent on the website.

But why? ChatGPT has come a long way from its start as a general-purpose chatbot. Thanks to the recent ChatGPT Plugins and Code Interpreter upgrades, it is slowly but surely moving beyond general-purpose use cases: users are realising its true potential and moving towards the subscription-based ChatGPT Plus (powered by both GPT-3.5 and GPT-4), alongside integration of the ChatGPT API.

Recently, OpenAI announced API access to more customisable versions of its software for other technology companies and corporate enterprises. Incidentally, traffic to the platform.openai.com developer website was up 3.1% from May to June.

However, speculation is brewing from all corners as to why ChatGPT is facing these circumstances. Some experts argue that the novelty of generative tools might be wearing off, while the Washington Post suggests the reduced usage could reflect school and college students on summer vacation. Another speculation is AI hallucination, where the chatbot generates inaccurate or unsatisfactory responses.

Do the maths: a paid ChatGPT Plus subscription provides users with a better experience (GPT-4), and they are migrating to the paid version to utilise the tool to its full potential. In other words, free users have realised that the full potential of ChatGPT is only available through GPT-4, so the decline possibly covers only the free users.

It should also be noted that OpenAI has not yet made any official announcement about the dip in ChatGPT usage, which opens the door to more speculation. But the real reason for the decline in website traffic could be the availability of the paid ChatGPT Plus subscription and the API.

The same was witnessed in the case of text-to-image platform Midjourney, which also saw a dip in its users after it culled its free service, and transitioned completely to paid subscription plans.

However, the drop in ChatGPT users cannot be separated between the free and paid versions, as both are accessed on the same website. Midjourney, on the other hand, has entirely stopped its free version.

Is ChatGPT Next?

While the dip in users continues, the real question is how long the free ChatGPT (based on GPT-3.5) service will last. “We love our free users and will continue to offer free access to ChatGPT. By offering this subscription pricing, we will be able to help support free access availability to as many people as possible,” the company said in its blog post.

Ironically, ChatGPT subscribers get GPT-4 while free users get an older version (GPT-3.5). The sustainability of that business model is questionable, given that people can get GPT-4 for free as part of Bing, minus ChatGPT Plugins and Code Interpreter.

With ChatGPT Plus, OpenAI might be looking for conversions from free to paid users. That shift is already happening, and the decline in users is an early sign of it.

Hopefully, this may also help reduce the operational and upkeep costs of ChatGPT which Sam Altman, OpenAI’s CEO, has described as “eye-watering.”

Competition Galore

Recently, San Francisco-based AI lab Anthropic announced Claude 2, a new ChatGPT rival open to the public in the US and the UK. One of the chatbot’s beta testers, Ethan Mollick, an associate professor at the Wharton School of the University of Pennsylvania, said in a LinkedIn post that it has two big advantages over other models: it is very good at handling documents (especially PDFs, which GPT struggles with) and shows a very sophisticated “understanding” of them. Furthermore, it continues to be the most “pleasant” AI personality.

At the same time, with alternatives like Bard and Pi, it will not be easy for the free ChatGPT (based on GPT-3.5) to retain users unless it reinvents itself and offers new features for them as well, such as limited-prompt access to ChatGPT Plugins and Code Interpreter, which would in turn increase both users and conversions.

All is not lost, as this puts OpenAI in a slightly advantageous position. While paid subscribers grow, the slowdown in engagement on the free version could bode well for the company, building an ecosystem of quality customers and users looking for more specialised use cases.

The post The Real Reason Behind ChatGPT User Decline appeared first on Analytics India Magazine.

7 Steps to Mastering Data Science Project Management with Agile

Image by Author

What is Agile?

The Agile methodology was formalised in early 2001, when 17 software practitioners came together to discuss the future of software development. It was founded on four core values and 12 principles.

It is very popular in the fast-paced, ever-changing technology industry, which it suits nicely. It is also a natural fit for data science project management, as it allows team members to continuously review the project's requirements, iterate, and communicate more as the project grows. The model evolves to reflect user-focused outputs, saving time, money and energy.

It is better to make decisions about changes during the different phases of the data science lifecycle, rather than at the end once it's all complete. Let’s look at two frameworks you can use to kickstart your agile data science project management.

Scrum

An example of an agile method is Scrum. Scrum is a framework that helps create structure in a team through a set of values, principles, and practices.

For example, using Scrum, a data science team can break its larger project up into a series of smaller projects. Each of these mini-projects is called a sprint and starts with sprint planning to define objectives, requirements, responsibilities and more.

Why is this beneficial? Because it holds different members of the team accountable and responsible for their tasks in completing a sprint. Completed sprints all play a major role in the end goal of the business, for example launching a new product.

Employees focus on delivering value to the end-users by being able to discover solutions to challenges that they may come across through the sprints.
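As a minimal illustration of the sprint structure described above, a larger project can be modelled as a series of time-boxed mini-projects, each with owned tasks. The `Sprint` and `Task` classes and all the names below are hypothetical, invented for this sketch rather than taken from any Scrum tool:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    owner: str          # each task has an accountable owner
    done: bool = False

@dataclass
class Sprint:
    """One time-boxed mini-project with its own goal and tasks."""
    goal: str
    tasks: list = field(default_factory=list)

    def progress(self) -> float:
        # Fraction of sprint tasks completed so far.
        if not self.tasks:
            return 0.0
        return sum(t.done for t in self.tasks) / len(self.tasks)

# A larger data science project split into a first sprint.
sprint1 = Sprint("Data collection and cleaning")
sprint1.tasks.append(Task("Pull raw data from warehouse", owner="Aisha"))
sprint1.tasks.append(Task("Handle missing values", owner="Ben"))
sprint1.tasks[0].done = True
print(f"{sprint1.goal}: {sprint1.progress():.0%} complete")
```

Tracking progress per sprint like this is what makes each completed sprint a measurable contribution toward the larger business goal.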

Tools for scrum include:

  • Monday.com
  • ProjectManager
  • Jira

Kanban

Kanban is another example of an agile method. It is a popular framework that originates from a Japanese inventory management system. Kanban shows employees a visual status of their current and pending tasks. Each task, represented as a Kanban card, is shown on the Kanban status board and moves through its life cycle to completion.

For example, you can have life cycle columns such as work in progress, developed, tested, and completed. This can help data scientists identify bottlenecks early on and reduce the amount of work in progress.

Kanban is a very popular framework in the data science world, with a lot of data enthusiasts adopting the method. It is a lightweight process whose visual nature improves the workflow and makes challenges easy to identify. It is also easy to implement, and data scientists respond much better to ‘What is your next task?’ than to ‘What tasks do you have in your next sprint?’.
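A minimal sketch of such a board, with hypothetical column names and a made-up work-in-progress (WIP) limit, might look like this in Python; none of these names come from an actual Kanban tool:

```python
from collections import OrderedDict

class KanbanBoard:
    """Minimal Kanban board: ordered status columns holding task cards."""

    def __init__(self, columns, wip_limit=3):
        # Each column maps to the list of cards currently in that stage.
        self.columns = OrderedDict((name, []) for name in columns)
        self.wip_limit = wip_limit  # cap on cards in "work in progress"

    def add_card(self, task):
        # New cards always enter the first (leftmost) column.
        first = next(iter(self.columns))
        self.columns[first].append(task)

    def move(self, task, to_column):
        # A card can only sit in one column at a time.
        for cards in self.columns.values():
            if task in cards:
                cards.remove(task)
        if (to_column == "work in progress"
                and len(self.columns[to_column]) >= self.wip_limit):
            raise ValueError("WIP limit reached - a bottleneck signal")
        self.columns[to_column].append(task)

board = KanbanBoard(["backlog", "work in progress", "tested", "completed"])
board.add_card("clean dataset")
board.move("clean dataset", "work in progress")
print(dict(board.columns))
```

The WIP limit is the key design choice: when a move into "work in progress" fails, the team has found a bottleneck before it silently grows.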

Tools for Kanban include:

  • Trello
  • Monday.com
  • ProofHub

Walk Before You Run

The first step in the agile methodology is to plan. Plan, plan, plan! I can’t stress enough how important this point is, and that's why you need to learn how to walk before you run. Having a tool such as Monday or Jira is great, but you will get nowhere if you do not plan.

Holding discussion sessions with your employees is essential, so that everybody is on the same page, understands what needs to be done, and has the same plan in their head. A lack of planning can lead to missed deadlines, loss of motivation and productivity, and even project infeasibility.

Once everybody is on the same page, you can then move on to the next step.

Design as a Team

The next phase is designing your project, and this is based on the conversations you had with your employees. All the aspects your team covered in your planning discussions will help you design an effective solution to your task at hand.

Communication is your biggest tool during this phase. Other members of your team may have different ways of working or of compartmentalizing tasks. Therefore, it is your responsibility as a team to design a solution that caters to everybody's needs, based on their method of working, availability, and so on.

During this phase, you can allocate who will own which aspect of the project. This gives employees a sense of importance, which increases their productivity. Once an employee has been given ownership of part of a task, it is their responsibility to ensure that it runs smoothly, meets deadlines, and goes as planned.

Develop your Solution

This is where your discussions, planning and design show. You may think that at this point you no longer need to communicate with your team members and can just get to work. That is not true: this is where communication matters the most. Weekly stand-ups are important; they help all employees stay in the loop and bounce ideas off one another.

During the development of your solution, you will come across challenges or bottlenecks that can be overwhelming and will alter your timeline and other people's ability to complete their tasks. Communicating every successful and unsuccessful step keeps all members in the loop and allows people to give you a helping hand.

Test, Test, Test

If you’re working on analyzing data, creating an algorithm, or producing a new product for the business — you will want to test it. And then test it again, and definitely test it some more.

There is no harm in making sure that you’re as accurate as possible when it comes to data science projects. Team members have invested their time and energy into this solution; it should be accurate and actually solve the problem at hand.

The last thing you want is to go back and forth because your results are not as accurate as they were in the first round.
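As a small illustration of this habit, a repeated check can gate each round of results so a regression is caught immediately. The `evaluate` helper, the threshold, and the toy predictions below are all invented for this sketch, not part of any real project:

```python
# Hypothetical accuracy gate: re-evaluate the model's predictions on a
# held-out set and fail fast if accuracy regresses below an agreed bar.
def evaluate(predictions, labels):
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

THRESHOLD = 0.80  # agreed with the team during planning

preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # toy model outputs
labels = [1, 0, 1, 0, 0, 1, 0, 0, 1, 1]   # toy ground truth

accuracy = evaluate(preds, labels)
assert accuracy >= THRESHOLD, f"accuracy {accuracy:.2f} below {THRESHOLD}"
print(f"accuracy: {accuracy:.2f}")
```

Running a gate like this after every change is cheaper than discovering in review that the second round scored worse than the first.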

Deploy

Deployment is one of the proudest moments of a data science project. The team communicates and comes together to put the latest increment into production before it becomes available to live users.

Data scientists need to approach this as if they were handing the solution over to a customer next. Reviewing, documenting, fixing, and discussing the whole data science project, its highs and its lows, is important.

Because let’s face it, a similar project will arise, and rather than having to start from scratch, you will have documentation of your previous projects as a stepping stone for the next one. It is these reviews and documents that will feed into the first step of discussing and planning your next data science project.

Wrapping it up

Ensuring that you have the right tools for successful agile data science project management is one thing; getting the most out of each phase is even more important. Communication is key, as you will know by now since I have mentioned it a thousand times. But just to remind you: to reap the rewards, you have to work hard, and that comes with a lot of communication.
Nisha Arya is a Data Scientist, Freelance Technical Writer and Community Manager at KDnuggets. She is particularly interested in providing Data Science career advice, tutorials and theory-based knowledge around Data Science. She also wishes to explore the different ways Artificial Intelligence can benefit the longevity of human life. A keen learner, she seeks to broaden her tech knowledge and writing skills, whilst helping guide others.

More On This Topic

  • 7 Steps to Mastering SQL for Data Science
  • 7 Steps to Mastering Python for Data Science
  • 7 Tips for Data Science Project Management
  • A Guide to Data Science Project Management Methodologies
  • 4 Steps for Managing a Data Science Project
  • KDnuggets™ News 22:n05, Feb 2: 7 Steps to Mastering Machine Learning…

Wipro eyes new opportunities with $1B investment in AI


Wipro is setting aside $1 billion to build up its capabilities and tap new opportunities powered by artificial intelligence (AI).

Spread across three years, the investment will include training all of the Indian IT consulting firm's 250,000 employees on the fundamentals of AI, and its responsible use. This process will be carried out over the next 12 months, with "more customized" training for employees in AI-specific roles to be provided on an ongoing basis. A curriculum will also be developed to map out the AI journey for different roles.


These plans will be necessary to harness "a new era of value, productivity, and commercial opportunities" through AI and generative AI, Wipro said in a statement. The company's goals include advancing its AI, data and analytics foundation, research, and FullStride Cloud platform.

Wipro will look to build new consulting capabilities to guide clients in leveraging AI, it said. In addition, its investment arm Wipro Ventures will launch a seed accelerator scheme to identify generative AI startups and provide them with the necessary training to be enterprise-ready. Investments in AI startups will also be intensified.

At the core of Wipro's AI initiatives is a new platform, known as Wipro ai360, which the vendor is touting as the catalyst to integrate AI into "every platform, every tool, and every solution" it uses internally, and that will also be offered to customers.


The platform builds on Wipro's "decades-long" investments in AI and is supported by a team of more than 30,000 that includes data scientists, data architects, and visualization and design specialists. These professionals will work across Wipro's four global business lines centered on cloud, engineering, consulting, and business and technology transformation.

The company's innovation facility Lab45 also will play a key role in supporting Wipro ai360 with skillsets, training, and research.

"Especially with the emergence of generative AI, we expect a fundamental shift up ahead, for all industries," said Wipro CEO and Managing Director Thierry Delaporte. "New business models, new ways of working, and new challenges, too.

"This is exactly why Wipro's ai360 ecosystem places responsible AI operations at the heart of all our AI work. It's meant to empower our talent pool and be ubiquitous across all our operations and processes, as well as our solutioning for clients."


The Indian IT giant joins other major players in economic powerhouses China and the U.S. that are making big bets on AI amid the emergence of generative AI. China's Alibaba Cloud, for instance, introduced a partnership program to drive the development of AI applications for verticals, including finance and petrochemicals.

Its large language model, Tongyi Qianwen, was launched in April and is expected to be integrated with all of Alibaba's own business applications, including e-commerce, search, navigation, entertainment, enterprise communication, and intelligent voice assistance.

Microsoft CEO Satya Nadella early this year also said the US software vendor will "incorporate AI in every layer of the stack", and pointed to AI as the "next big platform wave".
